
Tuesday, September 29, 2015

How to Check an OpenContrail Compute Node's Peering with the Controller Node


1)
Method-1:


Check the compute agent's XMPP connection status.
It should show an entry containing the IP of the controller node.

http://ip-of-compute-node:8085/Snh_AgentXmppConnectionStatusReq

Example:

Here the IPs of the compute nodes are 10.140.15.57 and 10.140.15.59.
Here the IP of the controller node is 10.140.15.60.
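
A quick way to verify this from the shell (a minimal sketch, using the node IPs above; the introspect page returns XML, so simply grep for the controller IP):

$curl -s http://10.140.15.57:8085/Snh_AgentXmppConnectionStatusReq | grep 10.140.15.60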





2)
Method-2:


Check the BGP neighbor list.
It should contain an entry for every compute node.

http://ip-of-contrail-controller-node:8083/Snh_BgpNeighborReq

Example:

* Here the IPs of the compute nodes peered with the controller are 10.140.15.57 and 10.140.15.59.
* Here the IP of the controller node is 10.140.15.60.
* I created a VM on the cp1 node; its private IP is assigned from the "net_tempest" network and its floating IP from the "public" network, so the routing instances of these two networks also appear here (see the last picture).
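
Similarly, a quick check from the shell (a minimal sketch, using the IPs above):

$curl -s http://10.140.15.60:8083/Snh_BgpNeighborReq | grep -E '10.140.15.57|10.140.15.59'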








3)
If you do not see entries at the above URLs, check the "discovery server" IP in contrail-vrouter-agent.conf:

$vim /etc/contrail/contrail-vrouter-agent.conf

# IP address of discovery server
server=10.140.15.78
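
After correcting the discovery server IP, restart the vrouter agent so it re-registers (service name as packaged on Ubuntu with upstart; adjust for your distribution):

$service contrail-vrouter-agent restart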








Monday, September 28, 2015

OpenStack libvirtError: internal error: process exited while connecting to monitor: could not open disk image rbd:volumes/volume

1)
Error on the compute node during VM creation:

$vim /var/log/upstart/nova-compute.log

2015-05-26 18:37:05.006 2575 ERROR nova.compute.manager [req-dc6c398d-8a2a-431e-83e9-500b6eb3c08e 916b42c0b92e44389b31db2423d84f68 1985e4c4b0d7485c8dd2793b0687012b] [instance: f00baff6-4093-47f0-b51d-6c73c65983b9] Instance failed to spawn
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9] Traceback (most recent call last):
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1803, in _spawn
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]     block_device_info)
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2373, in spawn
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]     block_device_info)
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3787, in _create_domain_and_network
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]     power_on=power_on)
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3688, in _create_domain
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]     domain.XMLDesc(0))
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]     six.reraise(self.type_, self.value, self.tb)
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3683, in _create_domain
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]     domain.createWithFlags(launch_flags)
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]   File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 179, in doit
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]   File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 139, in proxy_call
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]     rv = execute(f,*args,**kwargs)
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]   File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 77, in tworker
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]     rv = meth(*args,**kwargs)
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]   File "/usr/lib/python2.7/dist-packages/libvirt.py", line 896, in createWithFlags
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]     if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9] libvirtError: internal error: process exited while connecting to monitor: qemu-system-x86_64: -drive file=rbd:volumes/volume-8219ea23-8412-49ff-b177-8aac8a912c01:id=cinder_volume:key=AQDNkmBV8GkUChAAIEa/cZvSrUQJPpnuZ9Sy5A==:auth_supported=cephx\;none:mon_host=10.140.15.65\:6789\;10.140.15.67\:6789\;10.140.15.68\:6789,if=none,id=drive-virtio-disk0,format=raw,serial=8219ea23-8412-49ff-b177-8aac8a912c01,cache=none: error connecting
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9] qemu-system-x86_64: -drive file=rbd:volumes/volume-8219ea23-8412-49ff-b177-8aac8a912c01:id=cinder_volume:key=AQDNkmBV8GkUChAAIEa/cZvSrUQJPpnuZ9Sy5A==:auth_supported=cephx\;none:mon_host=10.140.15.65\:6789\;10.140.15.67\:6789\;10.140.15.68\:6789,if=none,id=drive-virtio-disk0,format=raw,serial=8219ea23-8412-49ff-b177-8aac8a912c01,cache=none: could not open disk image rbd:volumes/volume-8219ea23-8412-49ff-b177-8aac8a912c01:id=cinder_volume:key=AQDNkmBV8GkUChAAIEa/cZvSrUQJPpnuZ9Sy5A==:auth_supported=cephx\;none:mon_host=10.140.15.65\:6789\;10.140.15.67\:6789\;10.140.15.68\:6789: Could not open 'rbd:volumes/volume-82
2015-05-26 18:37:05.006 2575 TRACE nova.compute.manager [instance: f00baff6-4093-47f0-b51d-6c73c65983b9]
2015-05-26 18:37:05.008 2575 DEBUG nova.compute.claims [req-dc6c398d-8a2a-431e-83e9-500b6eb3c08e 916b42c0b92e44389b31db2423d84f68 1985e4c4b0d7485c8dd2793b0687012b] [instance: f00baff6-4093-47f0-b51d-6c73c65983b9] Aborting claim: [Claim: 1024 MB memory, 20 GB disk, 1 VCPUS] abort /usr/lib/python2.7/dist-packages/nova/compute/claims.py:113

2)
How to Fix
---------------

a)
Check "ceph auth list" on the storage node:

$ceph auth list

osd.0
        key: AQC/wmBVOOeCIRAAjKentYKETCMPR+KeXGuzFA==
        caps: [mon] allow rwx
        caps: [osd] allow *
osd.1
        key: AQBJZGRVwEwuHxAAbOYtK5Jsnx8VoBPYIDhcJA==
        caps: [mon] allow rwx
        caps: [osd] allow *
osd.2
        key: AQAbZ2RVmKjVERAALjtMVv8jZU36KDWxeoSOaQ==
        caps: [mon] allow rwx
        caps: [osd] allow *
client.admin
        key: AQDjY2RVALMNJhAA9apjowOWhjxYfn6IjMTMaA==
        caps: [mds] allow
        caps: [mon] allow *
        caps: [osd] allow *
client.cinder-backup
        key: AQATZWRVkBUNBxAA30x22DBwIKegZd1dnzQ4hg==
        caps: [mon] allow r
        caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=backups
client.cinder_volume
        key: AQBYZGRVoKgZAhAAd//siax98kwZpIf7ClttGQ==
        caps: [mon] allow r
        caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images
client.glance
        key: AQCTZGRVOAC4FBAApAOo5UyL/MDk01yTLxhkHQ==
        caps: [mon] allow r
        caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=images
client.radosgw.gateway
        key: AQAPZGRV0F+vERAALUj3KJ/uMQf6mv90HVIXiQ==
        caps: [mon] allow rw
        caps: [osd] allow rwx
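
To print just one client's key for comparison, ceph auth get-key can be used:

$ceph auth get-key client.cinder_volume
AQBYZGRVoKgZAhAAd//siax98kwZpIf7ClttGQ==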

b)
On the compute node:
The key stored in the libvirt secret should match the client.cinder_volume key on the storage node.

$cat /etc/libvirt/secrets/26f3bfcf-7acd-40ec-948d-62b12cd14901.base64
AQDNkmBV8GkUChAAIEa/cZvSrUQJPpnuZ9Sy5A==

* Note: Here the libvirt secret holds the wrong key, so we need to change it (see below). == Issue
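
One way to update the stored secret is virsh secret-set-value (a sketch, using the secret UUID from the filename above and the correct client.cinder_volume key from step a):

$virsh secret-set-value --secret 26f3bfcf-7acd-40ec-948d-62b12cd14901 --base64 AQBYZGRVoKgZAhAAd//siax98kwZpIf7ClttGQ==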

c)
On the compute node:
The Ceph keyring should also match the client.cinder_volume key on the storage node (here it already does).

$cat /etc/ceph/keyring.ceph.client.cinder_volume
[client.cinder_volume]
        key = AQBYZGRVoKgZAhAAd//siax98kwZpIf7ClttGQ==

d)
On the compute node, restart libvirtd:

$service libvirt-bin restart

3)
Done. Try creating the VM again; it should work.






How to run apt-get install through a proxy

1)
$export http_proxy='http://10.110.192.14:3128/'
$export https_proxy='http://10.110.192.14:3128/'

2)
Pass the proxy variable explicitly on the sudo command line, since sudo resets the environment by default:

$sudo http_proxy=$http_proxy apt-get install gdb
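
Alternatively, the proxy can be set persistently for apt alone (the file name 95proxy is arbitrary; any file under /etc/apt/apt.conf.d works):

$echo 'Acquire::http::Proxy "http://10.110.192.14:3128/";' | sudo tee /etc/apt/apt.conf.d/95proxy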



Wednesday, September 23, 2015

Ubuntu Linux Convert IST time to UTC time

1)
Print Time in IST
$ date
Wed Sep 23 23:54:54 IST 2015

2)
Print/Convert IST time to UTC time
$ date -u
Wed Sep 23 18:24:59 UTC 2015

3)
Change Time
$date --set "Wed Sep 23 18:22:01 UTC 2015"
Wed Sep 23 18:22:01 UTC 2015

4)
To change the time zone:
$sudo dpkg-reconfigure tzdata
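
5)
As a side note, the time in any zone can be printed without changing system settings by setting TZ for a single command:

$ TZ=UTC date
$ TZ=Asia/Kolkata date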

Friday, September 18, 2015

How to show command output on the terminal and save it to a file at the same time in Linux

1)
Print on the terminal and write to a file simultaneously:

$tail -f /var/log/upstart/ironic-api.log 2>&1 | tee a.txt

2)
Run a command in watch mode, appending the output to a file while printing it to the terminal:

$watch "myscript.sh 2>&1 | tee -a mylog.txt"
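
3)
Inside a bash script, all subsequent output can be mirrored to a file using process substitution (a bash-specific sketch; mylog.txt is an arbitrary name):

#!/bin/bash
exec > >(tee -a mylog.txt) 2>&1
echo "this line goes to both the terminal and mylog.txt"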



Wednesday, September 16, 2015

OpenStack Neutron: Clear/Delete host-routes


$neutron subnet-update 49ed150b-6d19-4dae-81dc-16813ab57c99 --host-routes action=clear
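
For reference, host routes can be set on a subnet with the same command (the destination and nexthop values below are only illustrative):

$neutron subnet-update 49ed150b-6d19-4dae-81dc-16813ab57c99 --host-routes type=dict list=true destination=40.0.1.0/24,nexthop=40.0.0.2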



Thursday, September 10, 2015

How to debug a multithreaded Python application with gdb


https://wiki.python.org/moin/DebuggingWithGdb

1)
Install gdb and Python debugging extensions
$ sudo apt-get install gdb python2.7-dbg

2)
If the process is already running, you can attach to it provided you know the process ID.
$ gdb python [process id]

Attaching to a running process like this will cause it to stop. You can tell it to continue running with the c (continue) command.

3)
If the hang occurs in some thread, the following commands may be handy:
(gdb) info threads

The current thread is marked with *. To see where it is in the Python code, use py-list:
(gdb) py-list
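
The same extensions provide py-bt for a Python-level backtrace of the current thread, and gdb's thread apply can run it across every thread:

(gdb) py-bt
(gdb) thread apply all py-bt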

4)
Example:
https://code.google.com/p/spyderlib/wiki/HowToDebugDeadlock

http://fedoraproject.org/wiki/Features/EasierPythonDebugging


Tuesday, September 8, 2015

VBoxManage list all dhcpservers

saju@saju-Inspiron-5521:~$ VBoxManage list dhcpservers
NetworkName:    HostInterfaceNetworking-vboxnet0
IP:             0.0.0.0
NetworkMask:    0.0.0.0
lowerIPAddress: 0.0.0.0
upperIPAddress: 0.0.0.0
Enabled:        No

NetworkName:    HostInterfaceNetworking-vboxnet1
IP:             0.0.0.0
NetworkMask:    0.0.0.0
lowerIPAddress: 0.0.0.0
upperIPAddress: 0.0.0.0
Enabled:        No

NetworkName:    HostInterfaceNetworking-vboxnet2
IP:             0.0.0.0
NetworkMask:    0.0.0.0
lowerIPAddress: 0.0.0.0
upperIPAddress: 0.0.0.0
Enabled:        No

saju@saju-Inspiron-5521:~$
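
A disabled DHCP server like these can be configured and enabled with VBoxManage as well (the address range below is only an example):

$VBoxManage dhcpserver modify --netname HostInterfaceNetworking-vboxnet0 --ip 192.168.56.100 --netmask 255.255.255.0 --lowerip 192.168.56.101 --upperip 192.168.56.254 --enable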

How do I find the DHCP server IP address

1)
saju@ubuntu:~$ cd /var/lib/dhcp/
saju@ubuntu:/var/lib/dhcp$

2)
saju@ubuntu:/var/lib/dhcp$ grep -r dhcp-server-identifier .
./dhclient.eth2.leases:  option dhcp-server-identifier 192.168.56.100;
./dhclient.eth1.leases:  option dhcp-server-identifier 192.168.56.100;
./dhclient.eth0.leases:  option dhcp-server-identifier 10.0.2.2;
./dhclient.eth0.leases:  option dhcp-server-identifier 10.0.2.2;
saju@ubuntu:/var/lib/dhcp$

* The result above is from a VirtualBox VM.
* 10.0.2.2 is the IP of the VirtualBox DHCP server.



Monday, September 7, 2015

OpenStack List All VM Instances from All Tenants


$nova list --all-tenants

OpenStack List all VM Instances In a Tenant by specifying tenant id


$nova list --tenant [tenant-id] --all-tenants
$nova list --tenant 111fabdd5c5b4b088512b21a5772222 --all-tenants


Wednesday, September 2, 2015

OpenStack Nova list --tenant


$nova list --tenant 111fabdd5c5b4b088512b21a5772222 --all-tenants