Thursday, August 3, 2017

OpenStack Trove


Trove Installation

Download a prebuilt Trove guest image:

wget http://tarballs.openstack.org/trove/images/ubuntu/mysql.qcow2
mv mysql.qcow2 trove-mysql.qcow2

Register the image with Glance:

glance image-create --name "mysql-5.6" --file trove-mysql.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress

Register the datastore and datastore version with Trove, then create an instance:

sudo trove-manage datastore_update mysql ''
GLANCE_IMAGE_ID=$(glance image-list | awk '/ mysql-5.6 / { print $2 }')
sudo trove-manage datastore_version_update mysql mysql-5.6 mysql ${GLANCE_IMAGE_ID} '' 1
FLAVOR_ID=$(openstack flavor list | awk '/ m1.small / { print $2 }')
trove create mysql-instance ${FLAVOR_ID} --size 5 --databases myDB --users user:r00tme --datastore_version mysql-5.6 --datastore mysql
trove list
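The `awk '/ m1.small / { print $2 }'` idiom above pulls an ID out of the ASCII table that the OpenStack CLIs print. A minimal, self-contained illustration of how it works (the table below is sample text in the same format, not output from a live cloud):

```shell
# Sample output in the shape `openstack flavor list` prints.
sample_output='+----+-----------+------+
| ID | Name      | RAM  |
+----+-----------+------+
| 1  | m1.tiny   | 512  |
| 2  | m1.small  | 2048 |
+----+-----------+------+'

# The pattern / m1.small / matches the row whose Name cell is m1.small
# (the surrounding spaces keep names like m1.smaller from matching).
# Fields are whitespace-separated, so $1 is "|" and $2 is the ID column.
FLAVOR_ID=$(echo "$sample_output" | awk '/ m1.small / { print $2 }')
echo "$FLAVOR_ID"   # prints 2
```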
While the instance builds, tail the nova-compute log from the devstack directory and keep checking the instance status:

cd ../devstack/
tail -n 100 -f n-cpu.log
trove list

Error

The instance went into ERROR state, and the trove-taskmanager log shows why:

2016-08-22 16:15:16.712 7651 DEBUG trove.taskmanager.models [-] Successfully created security group for instance: d87625a2-17ac-4bb0-9c50-19ca1fe92084 create_instance /opt/stack/trove/trove/taskmanager/models.py:393
2016-08-22 16:15:16.712 7651 DEBUG trove.taskmanager.models [-] Begin _create_server_volume_individually for id: d87625a2-17ac-4bb0-9c50-19ca1fe92084 _create_server_volume_individually /opt/stack/trove/trove/taskmanager/models.py:783
2016-08-22 16:15:16.713 7651 DEBUG trove.taskmanager.models [-] trove volume support = True _build_volume_info /opt/stack/trove/trove/taskmanager/models.py:811
2016-08-22 16:15:16.713 7651 DEBUG trove.taskmanager.models [-] Begin _create_volume for id: d87625a2-17ac-4bb0-9c50-19ca1fe92084 _create_volume /opt/stack/trove/trove/taskmanager/models.py:844
2016-08-22 16:15:16.713 7651 ERROR trove.taskmanager.models [-] Failed to create volume for instance d87625a2-17ac-4bb0-9c50-19ca1fe92084
Endpoint not found for service_type=volumev2, endpoint_type=publicURL, endpoint_region=RegionOne.
Traceback (most recent call last):
  File "/opt/stack/trove/trove/taskmanager/models.py", line 815, in _build_volume_info
    volume_size, volume_type, datastore_manager)
  File "/opt/stack/trove/trove/taskmanager/models.py", line 845, in _create_volume
    volume_client = create_cinder_client(self.context)
  File "/opt/stack/trove/trove/common/remote.py", line 128, in cinder_client
    endpoint_type=CONF.cinder_endpoint_type)
  File "/opt/stack/trove/trove/common/remote.py", line 71, in get_endpoint
    endpoint_type=endpoint_type)
NoServiceEndpoint: Endpoint not found for service_type=volumev2, endpoint_type=publicURL, endpoint_region=RegionOne.

So Trove needs the Cinder volumev2 endpoint to provision a data volume for the instance, but no such endpoint is registered. The lookup happens in trove/common/remote.py:
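One quick way to confirm the missing endpoint is to grep the service catalog for the volumev2 type. A minimal sketch (the catalog text below is sample data standing in for `openstack service list` output; on a live cloud you would pipe the real command instead):

```shell
# Sample rows from the broken deployment's service list: no volumev2 type.
catalog='| 7ebcc121e88c427a81b509334dd839e4 | trove   | database |
| 9a8a8b2da8104b8c8422d134b2dff319 | nova    | compute  |
| e0cb6a6687b043db869e5c0e06683d33 | glance  | image    |'

if echo "$catalog" | grep -q ' volumev2 '; then
  status="volumev2 endpoint registered"
else
  status="volumev2 endpoint missing"
fi
echo "$status"   # Trove volume support fails without it
```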

119 def cinder_client(context):
120     if CONF.cinder_url:
121         url = '%(cinder_url)s%(tenant)s' % {
122             'cinder_url': normalize_url(CONF.cinder_url),
123             'tenant': context.tenant}
124     else:
125         url = get_endpoint(context.service_catalog,
126                            service_type=CONF.cinder_service_type,
127                            endpoint_region=CONF.os_region_name,
128                            endpoint_type=CONF.cinder_endpoint_type)
The service catalog confirms that no volume service is registered:

stack@trove:/etc/trove$ openstack service list
+----------------------------------+-------------+----------------+
| ID                               | Name        | Type           |
+----------------------------------+-------------+----------------+
| 7ebcc121e88c427a81b509334dd839e4 | trove       | database       |
| 90125dfe6a434ef3b0174cb7248c69f2 | nova_legacy | compute_legacy |
| 9a07a66686fa4e0a89201d98f137a898 | neutron     | network        |
| 9a8a8b2da8104b8c8422d134b2dff319 | nova        | compute        |
| b506135021f64a98899c378cbd47bf5f | keystone    | identity       |
| e0cb6a6687b043db869e5c0e06683d33 | glance      | image          |
+----------------------------------+-------------+----------------+

Hence, we enable Cinder in local.conf:

# Enable Cinder - Block Storage service for OpenStack
CINDER_BRANCH=stable/mitaka
VOLUME_GROUP="cinder-volumes"
enable_service cinder c-api c-vol c-sch c-bak
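DevStack only picks up local.conf changes on a full re-run (./unstack.sh followed by ./stack.sh). A small sketch of sanity-checking the edit before re-stacking; the mktemp copy below stands in for ~/devstack/local.conf, which is an assumption about your checkout path:

```shell
# Stand-in for ~/devstack/local.conf (illustrative copy, not the live file).
LOCAL_CONF=$(mktemp)
cat > "$LOCAL_CONF" <<'EOF'
CINDER_BRANCH=stable/mitaka
VOLUME_GROUP="cinder-volumes"
enable_service cinder c-api c-vol c-sch c-bak
EOF

# Confirm the cinder services are enabled before tearing the stack down;
# on the real deployment you would then run: ./unstack.sh && ./stack.sh
if grep -q '^enable_service cinder' "$LOCAL_CONF"; then
  cinder_enabled=yes
  echo "cinder services enabled in local.conf"
fi
rm -f "$LOCAL_CONF"
```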

After re-running stack.sh, we can see the volumev2 service:

stack@ubuntu:~/devstack$ openstack service list
+----------------------------------+-------------+----------------+
| ID                               | Name        | Type           |
+----------------------------------+-------------+----------------+
| 23058a3ea403442fb92f602fd4ebb777 | cinderv2    | volumev2       |
| 297f61ee0df84e4f8b49657af3b816cf | nova        | compute        |
| 674ab4b086c64dc8aa51afabc7a8f203 | neutron     | network        |
| 6e506e2ae0c14ca6a605cbf7828f0a1d | cinder      | volume         |
| b961bd89072e4abeabdf7088854f4e55 | glance      | image          |
| ddd741dae5904cd49d26badc8d17e7ef | keystone    | identity       |
| f6ade7c1e3564fa28e5c5c73a181c3a3 | nova_legacy | compute_legacy |
+----------------------------------+-------------+----------------+
For reference, the complete local.conf:

[[local|localrc]]
DEST=/opt/stack

ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=tokentoken
HOST_IP=192.168.140.20

ENABLED_SERVICES=key,rabbit,mysql,horizon
ENABLED_SERVICES+=,n-api,n-crt,n-cpu,n-net,n-cond,n-sch,n-novnc,n-cauth
ENABLED_SERVICES+=,g-api,g-reg

# Enable Cinder - Block Storage service for OpenStack
CINDER_BRANCH=stable/mitaka
VOLUME_GROUP="cinder-volumes"
enable_service cinder c-api c-vol c-sch c-bak

# Enabling trove
TROVE_BRANCH=stable/mitaka
enable_plugin trove git://git.openstack.org/openstack/trove stable/mitaka
enable_plugin trove-dashboard git://git.openstack.org/openstack/trove-dashboard stable/mitaka


# Enabling Neutron (network) Service
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service q-metering
enable_service neutron

Q_PLUGIN=ml2
#Q_USE_DEBUG_COMMAND=True
if [ "$Q_PLUGIN" = "ml2" ]; then
  #Q_ML2_TENANT_NETWORK_TYPE=gre
  Q_ML2_TENANT_NETWORK_TYPE=vxlan
  :
fi
## Neutron options
FIXED_RANGE=10.0.0.0/24
FIXED_NETWORK_SIZE=256
NETWORK_GATEWAY=10.0.0.1
PRIVATE_SUBNET_NAME=privateA

PUBLIC_SUBNET_NAME=public-subnet
FLOATING_RANGE=192.168.140.0/24
PUBLIC_NETWORK_GATEWAY=192.168.140.254
##Q_FLOATING_ALLOCATION_POOL=start=192.168.27.102,end=192.168.27.110
PUBLIC_INTERFACE=eth0
Q_USE_PROVIDERNET_FOR_PUBLIC=True
OVS_PHYSICAL_BRIDGE=br-ex
PUBLIC_BRIDGE=br-ex
OVS_BRIDGE_MAPPINGS=public:br-ex

LIBVIRT_TYPE=qemu

## Enable Trove

ENABLED_SERVICES+=,trove,tr-api,tr-tmgr,tr-cond


IMAGE_URLS="http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-uec.tar.gz"

SCREEN_LOGDIR=/opt/stack/screen-logs
SYSLOG=True
LOGFILE=~/devstack/stack.sh.log


Q_USE_DEBUG_COMMAND=True

# RECLONE=No
RECLONE=yes
OFFLINE=False

After installing Cinder, we still got an error, this time from the Nova scheduler:

No valid host was found. There are not enough hosts available.
Code: 500
Details:

File "/opt/stack/nova/nova/conductor/manager.py", line 392, in build_instances
    context, request_spec, filter_properties)
File "/opt/stack/nova/nova/conductor/manager.py", line 436, in _schedule_instances
    hosts = self.scheduler_client.select_destinations(context, spec_obj)
File "/opt/stack/nova/nova/scheduler/utils.py", line 372, in wrapped
    return func(*args, **kwargs)
File "/opt/stack/nova/nova/scheduler/client/__init__.py", line 51, in select_destinations
    return self.queryclient.select_destinations(context, spec_obj)
File "/opt/stack/nova/nova/scheduler/client/__init__.py", line 37, in __run_method
    return getattr(self.instance, __name)(*args, **kwargs)
File "/opt/stack/nova/nova/scheduler/client/query.py", line 32, in select_destinations
    return self.scheduler_rpcapi.select_destinations(context, spec_obj)
File "/opt/stack/nova/nova/scheduler/rpcapi.py", line 121, in select_destinations
    return cctxt.call(ctxt, 'select_destinations', **msg_args)
File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 158, in call
    retry=self.retry)
File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, in _send
    timeout=timeout, retry=retry)
File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 470, in send
    retry=retry)
File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 461, in _send
    raise result

Created: Aug. 23, 2016, 5:48 a.m.
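"No valid host was found" means the Nova scheduler filtered out every compute node; on a single-node devstack that is usually the flavor's RAM or disk not fitting what the host has free. A schematic check with illustrative numbers only; the real values would come from `openstack hypervisor stats show` and `openstack flavor show`:

```shell
# Assumed numbers for illustration, not readings from this deployment.
flavor_ram_mb=4096   # memory the requested flavor needs
free_ram_mb=1536     # free memory on the compute host

if [ "$flavor_ram_mb" -gt "$free_ram_mb" ]; then
  verdict="flavor does not fit: expect 'No valid host was found'"
else
  verdict="flavor fits this host"
fi
echo "$verdict"
```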

After switching to the m1.small flavor, the instance comes up. We can see its status in Horizon:

mysql-instance | mysql-5.6 | 10.0.0.4, fd1d:6b4e:634a:0:f816:3eff:fea4:f2c2 | m1.small | Active | nova | None | Running | 1 minute
stack@trove2:~/trove-test$ trove list
+--------------------------------------+----------------+-----------+-------------------+--------+-----------+------+
| ID                                   | Name           | Datastore | Datastore Version | Status | Flavor ID | Size |
+--------------------------------------+----------------+-----------+-------------------+--------+-----------+------+
| 0d1cf949-2db9-4d73-8843-fc7a7d279a11 | mysql-instance | mysql     | mysql-5.6         | ERROR  | 3         |    5 |
| f86da618-0d7f-464b-b051-769f1864095e | mysql-instance | mysql     | mysql-5.6         | BUILD  | 2         |    5 |
+--------------------------------------+----------------+-----------+-------------------+--------+-----------+------+
