
Monday, September 28, 2015

OpenStack libvirtError: internal error: process exited while connecting to monitor could not open disk image rbd:volumes/volume

1)
Error on the compute node during VM creation

$vim /var/log/upstart/nova-compute.log

2015-05-26 18:37:05.006 2575 ERROR nova.compute.manager [req-dc6c398d-8a2a-431e-83e9-500b6eb3c08e 916b42c0b92e44389b31db2423d84f68 1985e4c4b0d7485c8dd2793b0687012b] [instance: f00baff6-4093-47f0-b51d-6c73c65983b9] Instance failed to spawn
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9] Traceback (most recent call last):
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1803, in _spawn
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]     block_device_info)
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2373, in spawn
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]     block_device_info)
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3787, in _create_domain_and_network
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]     power_on=power_on)
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3688, in _create_domain
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]     domain.XMLDesc(0))
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]     six.reraise(self.type_, self.value, self.tb)
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3683, in _create_domain
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]     domain.createWithFlags(launch_flags)
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]   File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 179, in doit
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]   File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 139, in proxy_call
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]     rv = execute(f,*args,**kwargs)
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]   File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 77, in tworker
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]     rv = meth(*args,**kwargs)
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]   File "/usr/lib/python2.7/dist-packages/libvirt.py", line 896, in createWithFlags
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]     if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9] libvirtError: internal error: process exited while connecting to monitor: qemu-system-x86_64: -drive file=rbd:volumes/volume-8219ea23-8412-49ff-b177-8aac8a912c01:id=cinder_volume:key=AQDNkmBV8GkUChAAIEa/cZvSrUQJPpnuZ9Sy5A==:auth_supported=cephx\;none:mon_host=10.140.15.65\:6789\;10.140.15.67\:6789\;10.140.15.68\:6789,if=none,id=drive-virtio-disk0,format=raw,serial=8219ea23-8412-49ff-b177-8aac8a912c01,cache=none: error connecting
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9] qemu-system-x86_64: -drive file=rbd:volumes/volume-8219ea23-8412-49ff-b177-8aac8a912c01:id=cinder_volume:key=AQDNkmBV8GkUChAAIEa/cZvSrUQJPpnuZ9Sy5A==:auth_supported=cephx\;none:mon_host=10.140.15.65\:6789\;10.140.15.67\:6789\;10.140.15.68\:6789,if=none,id=drive-virtio-disk0,format=raw,serial=8219ea23-8412-49ff-b177-8aac8a912c01,cache=none: could not open disk image rbd:volumes/volume-8219ea23-8412-49ff-b177-8aac8a912c01:id=cinder_volume:key=AQDNkmBV8GkUChAAIEa/cZvSrUQJPpnuZ9Sy5A==:auth_supported=cephx\;none:mon_host=10.140.15.65\:6789\;10.140.15.67\:6789\;10.140.15.68\:6789: Could not open 'rbd:volumes/volume-82
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]
2015-05-26 18:37:05.008 2575 DEBUG nova.compute.claims [req-dc6c398d-8a2a-431e-83e9-500b6eb3c08e 916b42c0b92e44389b31db2423d84f68 1985e4c4b0d7485c8dd2793b0687012b] [instance: f00baff6-4093-47f0-b51d-6c73c65983b9] Aborting claim: [Claim: 1024 MB memory, 20 GB disk, 1 VCPUS] abort /usr/lib/python2.7/dist-packages/nova/compute/claims.py:113

2)
How to Fix
---------------

a)
Check the output of "ceph auth list" on the storage node

$ceph auth list

osd.0
        key: AQC/wmBVOOeCIRAAjKentYKETCMPR+KeXGuzFA==
        caps: [mon] allow rwx
        caps: [osd] allow *
osd.1
        key: AQBJZGRVwEwuHxAAbOYtK5Jsnx8VoBPYIDhcJA==
        caps: [mon] allow rwx
        caps: [osd] allow *
osd.2
        key: AQAbZ2RVmKjVERAALjtMVv8jZU36KDWxeoSOaQ==
        caps: [mon] allow rwx
        caps: [osd] allow *
client.admin
        key: AQDjY2RVALMNJhAA9apjowOWhjxYfn6IjMTMaA==
        caps: [mds] allow
        caps: [mon] allow *
        caps: [osd] allow *
client.cinder-backup
        key: AQATZWRVkBUNBxAA30x22DBwIKegZd1dnzQ4hg==
        caps: [mon] allow r
        caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=backups
client.cinder_volume
        key: AQBYZGRVoKgZAhAAd//siax98kwZpIf7ClttGQ==
        caps: [mon] allow r
        caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images
client.glance
        key: AQCTZGRVOAC4FBAApAOo5UyL/MDk01yTLxhkHQ==
        caps: [mon] allow r
        caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=images
client.radosgw.gateway
        key: AQAPZGRV0F+vERAALUj3KJ/uMQf6mv90HVIXiQ==
        caps: [mon] allow rw
        caps: [osd] allow rwx

b)
On the compute node, check the key stored in the libvirt secret. It should match the client.cinder_volume key on the storage node:

$cat /etc/libvirt/secrets/26f3bfcf-7acd-40ec-948d-62b12cd14901.base64
AQDNkmBV8GkUChAAIEa/cZvSrUQJPpnuZ9Sy5A==

* Note: the secret here holds the wrong key (it does not match the client.cinder_volume key above), so it needs to be changed. This is the cause of the error.
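The cleanest way to replace the stale key is through virsh rather than editing the file under /etc/libvirt/secrets by hand. A sketch, assuming the secret UUID and the client.cinder_volume key shown in this example:

```shell
# UUID and key taken from this example; substitute your own values.
SECRET_UUID=26f3bfcf-7acd-40ec-948d-62b12cd14901
CORRECT_KEY='AQBYZGRVoKgZAhAAd//siax98kwZpIf7ClttGQ=='

# Replace the value libvirt hands to QEMU for cephx authentication.
virsh secret-set-value --secret "$SECRET_UUID" --base64 "$CORRECT_KEY"
```

If `virsh secret-list` shows no secret with that UUID, the secret has to be defined first with `virsh secret-define` from an XML description before a value can be set.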

c)
On the compute node, the Ceph keyring holds the correct client.cinder_volume key (matching the storage node):

$cat /etc/ceph/keyring.ceph.client.cinder_volume
[client.cinder_volume]
        key = AQBYZGRVoKgZAhAAd//siax98kwZpIf7ClttGQ==
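The two files can be compared without eyeballing the keys. A quick check, assuming the paths from this example:

```shell
#!/bin/sh
# Paths from this example; adjust the secret UUID for your deployment.
SECRET_FILE=/etc/libvirt/secrets/26f3bfcf-7acd-40ec-948d-62b12cd14901.base64
KEYRING=/etc/ceph/keyring.ceph.client.cinder_volume

libvirt_key=$(cat "$SECRET_FILE")
# Pull the value after "key = " out of the INI-style keyring file.
ceph_key=$(awk -F' = ' '/key/ {print $2}' "$KEYRING")

if [ "$libvirt_key" = "$ceph_key" ]; then
    echo "keys match"
else
    echo "MISMATCH: libvirt has $libvirt_key, ceph has $ceph_key"
fi
```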

d)
On the compute node, restart libvirtd:

$service libvirt-bin restart

3)
Done. Try to create the VM again; it should now succeed.
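Before (or instead of) retrying through Nova, the fix can be verified directly on the compute node. A sketch, assuming the secret UUID, keyring path, and pool name from this example:

```shell
# Confirm libvirt now reports the corrected key for the secret.
virsh secret-get-value 26f3bfcf-7acd-40ec-948d-62b12cd14901

# Confirm cephx auth works from this host with the cinder_volume
# identity: listing the volumes pool should now succeed.
rbd --id cinder_volume --keyring /etc/ceph/keyring.ceph.client.cinder_volume -p volumes ls
```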





