Sunday, November 30, 2014

OpenStack Python Client Unit Testing mocked self.client.httpclient.request

1)
https://github.com/openstack/python-neutronclient/blob/master/neutronclient/tests/unit/test_cli20.py

def _test_create_resource(--):
    self.mox.StubOutWithMock(cmd, "get_client")
    self.mox.StubOutWithMock(self.client.httpclient, "request")
    cmd.get_client().MultipleTimes().AndReturn(self.client)

    self.client.httpclient.request(
        end_url(path, format=self.format), 'POST',
        body=mox_body,
        headers=mox.ContainsKeyValue(
            'X-Auth-Token', TOKEN)).AndReturn((MyResp(200), resstr)) <====

2)
https://github.com/openstack/python-neutronclient/blob/master/neutronclient/client.py

def do_request(self, url, method, **kwargs):
    resp, body = self._cs_request(self.endpoint_url + url, method,
                                  **kwargs)

def _cs_request(self, *args, **kwargs):

    #raise Exception(type(self.request)) <==
    resp, body = self.request(*args, **kwargs) <=== self.client.httpclient.request
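To see why stubbing `self.client.httpclient.request` is enough to intercept every API call, here is a minimal, self-contained sketch. It uses `unittest.mock` instead of mox, and the `HTTPClient` class below is a simplified stand-in for neutronclient's client, not the real code:

```python
from unittest import mock

class HTTPClient(object):
    """Simplified stand-in for neutronclient's HTTPClient."""
    endpoint_url = 'http://localhost:9696'

    def request(self, url, method, **kwargs):
        raise RuntimeError('would hit the network')

    def _cs_request(self, *args, **kwargs):
        # Delegates to self.request, so replacing that attribute
        # intercepts every HTTP call made through this client.
        return self.request(*args, **kwargs)

    def do_request(self, url, method, **kwargs):
        return self._cs_request(self.endpoint_url + url, method, **kwargs)

client = HTTPClient()
# Stub out the request attribute, just like StubOutWithMock does.
client.request = mock.Mock(return_value=(200, '{"networks": []}'))

resp, body = client.do_request('/v2.0/networks.json', 'GET')
assert resp == 200
client.request.assert_called_once_with(
    'http://localhost:9696/v2.0/networks.json', 'GET')
```

No network traffic happens; the canned `(200, body)` tuple flows back up through `_cs_request` and `do_request` exactly as in the mox test above.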




OpenStack Unit Testing With Tox, run_tests.sh and testr

OpenStack python-neutronclient Unit Testing With Tox, run_tests.sh and testr

https://github.com/openstack/python-neutronclient/

1)

Clone python-neutronclient
#git clone https://github.com/openstack/python-neutronclient.git

2)
Some useful links
https://github.com/openstack/python-neutronclient/blob/master/HACKING.rst

http://docs.openstack.org/developer/hacking/ <=== IMP

3)
Tox and testr configs

https://wiki.openstack.org/wiki/Testr

https://github.com/openstack/python-neutronclient/blob/master/.testr.conf

https://github.com/openstack/python-neutronclient/blob/master/tox.ini

https://github.com/openstack/python-neutronclient/blob/master/test-requirements.txt


https://github.com/openstack/python-neutronclient/blob/master/requirements.txt

4)
What are testr, tox and run_tests.sh?


a)
Test repository (testr) is a small application for tracking test results.
testr is configured via the ‘.testr.conf’ file which needs to be in the same directory that testr is run from.

http://testrepository.readthedocs.org/en/latest/MANUAL.html

b)
Tox aims to automate and standardize testing in Python.
Tox is a generic virtualenv management and test command line tool you can use for:
* checking your package installs correctly with different Python versions and interpreters.
* Running your tests in each of the environments, configuring your test tool of choice.
* Acting as a frontend to Continuous Integration servers, greatly reducing boilerplate and merging CI and shell-based testing.

Install tox "pip install tox" and put basic information about your project and the test environments you want your project to run in into a tox.ini file.You can also try generating a tox.ini file automatically, by running tox-quickstart and then answering a few simple questions.

http://tox.readthedocs.org/en/latest/
http://tox.readthedocs.org/en/latest/examples.html
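A minimal sketch of what such a tox.ini looks like (the real python-neutronclient file, linked above, defines more environments and flags):

```ini
[tox]
envlist = py27,pep8

[testenv]
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
commands = testr run {posargs}

[testenv:pep8]
commands = flake8
```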

c)
Actually "tox" and "run_tests.sh" are wrappers to use "testr".

5)
Install and run tox to setup test env and test your project

#cd python-neutronclient
#sudo pip install tox


To install and set up all environments defined in the "tox.ini" file:
#tox

* "tox" command will read "python-neutronclient/tox.ini" and setup test env and test the project.
* "tox" command also read "python-neutronclient/test-requirements.txt" to setup test env, see "tox.ini" file.
* This "tox" command will creates hidden files like ".testrepository" and ".tox".
* List all environment folders in the ".tox" hidden folder "#ls .tox"
* This command setup all environments defined in the "tox.ini" file and install all dependencies defined in the "requirements.txt" and "test-requirements.txt".
* https://wiki.openstack.org/wiki/Testr

6)
To install and set up the Python 2.7 (py27) environment

#tox -epy27
* Don't manually activate the virtualenv before executing this "tox" command, since tox recreates the virtualenv in that case.

OR

Use "-r" option to recreate the environment from scratch and install all dependencies defined in the "requirements.txt" and "test-requirements.txt".
##tox -epy27 -r

* Under ".tox" hidden folder, you can see the environment directory "py27". Also check "tox.ini" file for configuration.
* Check the log file "#tail -f .tox/py27/log/py27-1.log" in the hidden dir to see what happens during the environment setup.

7)
List all test environments

#tox -l
py26
py27
py33
pypy
pep8

8)
Show configuration information for all environments

#tox --showconfig
* This command reads from the "tox.ini" file.

9)
How can I run just one test?


Method-1):

Using Tox
a)
Deactivate virtualenv

b)
Run Test
#tox -epy27 -- neutronclient.tests.unit.test_cli20_network.CLITestV20NetworkJSON.test_create_network
#tox -epy27 -- neutronclient.tests.unit.test_cli20_network.CLITestV20NetworkJSON
#tox -epy27 -- neutronclient.tests.unit.test_cli20_network
#tox -epy27 -- neutronclient.tests
#tox -epy27


* Don't manually activate the virtualenv before executing this "tox" command, since tox recreates the virtualenv in that case.

Method-2):


Using testr
a)
Activate virtualenv
#source .tox/py27/bin/activate

b)
Run Test
#testr run neutronclient.tests.unit.test_cli20_network.CLITestV20NetworkJSON.test_create_network
#testr run neutronclient.tests.unit.test_cli20_network.CLITestV20NetworkJSON
#testr run neutronclient.tests.unit.test_cli20_network
#testr run neutronclient.tests
#testr run


c)
Run parallel Test
#testr run --parallel neutronclient.tests.unit.test_cli20_network.CLITestV20NetworkJSON.test_create_network
#testr run --parallel neutronclient.tests.unit.test_cli20_network.CLITestV20NetworkJSON
#testr run --parallel neutronclient.tests.unit.test_cli20_network
#testr run --parallel neutronclient.tests
#testr run --parallel


10)
Run pep8 test


a)
Using Tox


* Deactivate virtualenv

* Run only pep8 Test
#tox -e pep8 -- neutronclient.tests.unit.test_cli20_network.CLITestV20NetworkJSON.test_create_network
#tox -e pep8 -- neutronclient.tests.unit.test_cli20_network.CLITestV20NetworkJSON
#tox -e pep8 -- neutronclient.tests.unit.test_cli20_network
#tox -e pep8 -- neutronclient.tests
#tox -e pep8


* Run py27 and pep8 Test
#tox -e py27,pep8 -- neutronclient.tests.unit.test_cli20_network.CLITestV20NetworkJSON.test_create_network
#tox -e py27,pep8 -- neutronclient.tests.unit.test_cli20_network.CLITestV20NetworkJSON
#tox -e py27,pep8 -- neutronclient.tests.unit.test_cli20_network
#tox -e py27,pep8 -- neutronclient.tests
#tox -e py27,pep8


http://docs.openstack.org/developer/ceilometer/contributing/source.html

b)
Using flake8


* Activate virtualenv
#source .tox/py27/bin/activate
* Run pep8 test
#flake8

11)
testr help


#testr commands
#testr help run


12)
Run only the tests in the CLITestV20NetworkJSON class
#tox -epy27 -- '(CLITestV20NetworkJSON)'
Run only the tests in the CLITestV20NetworkJSON and CLITestV20FloatingIpsJSON classes
#tox -epy27 -- '(CLITestV20NetworkJSON|CLITestV20FloatingIpsJSON)'
Run only the test "test_create_network" in the CLITestV20NetworkJSON class
#tox -epy27 -- '(CLITestV20NetworkJSON.test_create_network)'
Run only the tests "test_create_network" and "test_create_floatingip" in the CLITestV20NetworkJSON and CLITestV20FloatingIpsJSON classes
#tox -epy27 -- '(CLITestV20NetworkJSON.test_create_network|CLITestV20FloatingIpsJSON.test_create_floatingip)'

13)
run only particular test of python-tackerclient project

$ python -m testtools.run tackerclient.tests.unit.test_validators.ValidatorTest

$ python -m testtools.run tackerclient.tests.unit.test_validators.ValidatorTest.test_validate_ip_subnet


Saturday, November 29, 2014

Python pip install from local cache

1)
Enable download_cache for pip.

Create a configuration file named ~/.pip/pip.conf, and add the following contents:
[global]
download_cache = ~/.cache/pip

OR

In one command:
#printf '[global]\ndownload_cache = ~/.cache/pip\n' >> ~/.pip/pip.conf



2)
Install a package
#pip install six

* Add the package "six" to cache and install

3)
Check the files in the cache
#ls -lsh ~/.cache/pip/

4)
Install the same package again and you can see that it is taken from the cache
#pip install six

* Gets the package "six" from the cache and installs it

5)
You can also manually add packages to pip cache

a)
Download the package https://pypi.python.org/packages/source/B/Babel/Babel-1.3.tar.gz#md5=5264ceb02717843cbc9ffce8e6e06bdb

b)
Copy the downloaded "Babel-1.3.tar.gz" file to the "~/.cache/pip/" folder and rename it to the URL-encoded form of its full download URL.

c)
To build that name from the URL:
Replace "/" with %2F
Replace ":" with %3A
Like: https%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FB%2FBabel%2FBabel-1.3.tar.gz

#mv Babel-1.3.tar.gz https%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FB%2FBabel%2FBabel-1.3.tar.gz

d)
Create another file with the above name plus the extension ".content-type", containing "application/octet-stream".

Example:
#vim https%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2Fm%2Fmock%2Fmock-1.0.1.tar.gz.content-type

application/octet-stream
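The renaming rule above is plain URL percent-encoding. A quick way to generate the cache file name (shown with Python 3's `urllib.parse.quote`; Python 2 used `urllib.quote`):

```python
from urllib.parse import quote

url = 'https://pypi.python.org/packages/source/B/Babel/Babel-1.3.tar.gz'
# safe='' forces ':' and '/' (and every other reserved character)
# to be percent-encoded, matching the cache naming scheme.
cache_name = quote(url, safe='')
print(cache_name)
# https%3A%2F%2Fpypi.python.org%2Fpackages%2Fsource%2FB%2FBabel%2FBabel-1.3.tar.gz
```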



Wednesday, November 26, 2014

Python mox mock Unit Test StubOutWithMock example

1)

import mox

class MyRequestHandler(object):
    def authenticate(self, request):
        print "===in authenticate==="


handler = MyRequestHandler()

#####1) Start recording (put mock in record mode) #####
m = mox.Mox()
m.StubOutWithMock(handler, "authenticate")#stubout the method 'authenticate' and replace with a mock object named 'authenticate'


#####2) Stop recording (put mock objects in reply mode) #####
m.ReplayAll()



#####3) Verify that we played all recorded things #####
m.VerifyAll()



#####4) UnsetStubs #####
m.UnsetStubs()






2)

import mox

class MyRequestHandler(object):
    def handle_request(self, request):
        self.authenticate(request)
        self.authorize(request)
        self.process(request)

    def authenticate(self, request):
        print "===in authenticate==="

    def authorize(self, request):
        print "===in authorize==="

    def process(self, request):
        print "===in process==="

    def test_method1(self, request):
        print "===in test_method1==="

handler = MyRequestHandler()

#####1) Start recording (put mock in record mode) #####
m = mox.Mox()
m.StubOutWithMock(handler, "authenticate")#stubout the method 'authenticate' and replace with a mock object named 'authenticate'
m.StubOutWithMock(handler, "authorize")#stubout the method 'authorize' and replace with a mock object named 'authorize'

handler.authenticate(mox.IsA(int))#record the behavior you expect by calling the expected methods on the mock object
handler.authorize(mox.IsA(int))#record the behavior you expect by calling the expected methods on the mock object

handler.authorize(mox.IsA(int)).AndReturn("\nReturn me when calling 'authorize' method \n\n")

handler.authorize(mox.IsA(int)).AndRaise(Exception("Got error \n\n"))

handler.authorize("saju").AndReturn("\nCalled me with string argument 'saju'\n\n")


#####2) Stop recording (put mock objects in reply mode) and start testing #####
m.ReplayAll()

handler.authenticate(1)#Testing method call with int value
handler.authorize(1)#Testing method call with int value

print handler.authorize(1)#Testing the return

try:
    handler.authorize(1)#testing the exception raise, we should add try, catch here, otherwise execution stop here
except Exception as ex:
    print ex
    pass

print "\n\n---here---"
print handler.authorize('saju')#Testing the method with string argument 'saju'


#####3) Verify that we played all recorded things #####
m.VerifyAll()#verify that all recorded (expected) interactions occurred


#####4) UnsetStubs #####

m.UnsetStubs()




Python mox mock unittest MultipleTimes example

1)
Without MultipleTimes

class Time:
    def get_time(self):
        pass


import mox

## Start recording (start record mode)
t=mox.MockObject(Time)

t.get_time().AndReturn("you can call me only once")


## Stop recording (start reply mode)

mox.Replay(t)

t.get_time()

t.get_time()


* Here You can call the method "get_time" only once 




2)
With MultipleTimes

class Time:
    def get_time(self):
        pass

import mox

t=mox.MockObject(Time)

t.get_time().MultipleTimes().AndReturn("you can call me multiple times")

mox.Replay(t)

t.get_time()

t.get_time()

* Here You can call the method "get_time" multiple times



Tuesday, November 25, 2014

OpenStack Python Client How to add new CLI command

1)
Clone python-neutronclient
#git clone https://github.com/openstack/python-neutronclient.git
#cd python-neutronclient
#git checkout 2.3.6


2)
Export the credentials
export OS_USERNAME=admin
export OS_PASSWORD=secret123
export OS_TENANT_NAME=demo
export OS_AUTH_URL=http://127.0.0.1:35357/v2.0


3)
Run the CLI command from cloned dir

#python neutron net-list
Success: Working

4)
Try to execute a command which is not implemented yet
#python neutron ipam-list
Error: Unknown command [u'ipam-list']

5)
Create a command-to-class mapping for our new command "ipam-list"


#vim python-neutronclient/neutronclient/shell.py

* Add the name of the command and class (command to class map) to the COMMAND_V2 dictionary

Example:
--------------
COMMAND_V2 = {
    .... ....
    'ipam-list':None,
}


COMMANDS = {'2.0': COMMAND_V2}


* Then try to run the new command "ipam-list"
#python neutron ipam-list
Error: 'NoneType' object is not callable

6)
Create a file named "ipam.py" under python-neutronclient/neutronclient/neutron/v2_0/ and add following lines

import logging
from neutronclient.neutron.v2_0 import ListCommand

class ListIpam(ListCommand):
    resource = 'ipam'
    log = logging.getLogger(__name__ + '.ListIpam')
    _formatters = {}
    list_columns = ['id', 'name']
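
A rough illustration (not the real neutronclient code) of how a list command derives the client call from its `resource` attribute, which explains the "'Client' object has no attribute 'list_ipams'" error seen in the next step:

```python
# Hypothetical sketch: the base ListCommand pluralizes the resource
# name and looks up a matching method on the client object.
resource = 'ipam'
client_method = "list_%ss" % resource
assert client_method == 'list_ipams'  # must exist on Client (step 8)
```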


7)
Map the command "ipam-list" to new class "ListIpam"


#vim python-neutronclient/neutronclient/shell.py


from neutronclient.neutron.v2_0 import ipam

COMMAND_V2 = {
    .... ....
    'ipam-list':ipam.ListIpam,
}


* Then try to run the command
#python neutron ipam-list
Error: 'Client' object has no attribute 'list_ipams'

8)
Add a method to the client class
#vim python-neutronclient/neutronclient/v2_0/client.py
This file is responsible for generating REST API requests and sending them to your plugin running on neutron-server.
Add the following code

class Client(object):
    .... ....
    ipams_path = "/ipams"
   
    @APIParamsCall
    def list_ipams(self, **_params):
        return self.get(self.ipams_path, params=_params)


* Then try to run the new command
#python neutron ipam-list
Success: Working

10)
Diff

10,a)
$ git diff
diff --git a/neutronclient/shell.py b/neutronclient/shell.py
index f1f2e2e..bda3e12 100644
--- a/neutronclient/shell.py
+++ b/neutronclient/shell.py
@@ -61,6 +61,7 @@ from neutronclient.neutron.v2_0.vpn import ikepolicy
 from neutronclient.neutron.v2_0.vpn import ipsec_site_connection
 from neutronclient.neutron.v2_0.vpn import ipsecpolicy
 from neutronclient.neutron.v2_0.vpn import vpnservice
+from neutronclient.neutron.v2_0 import ipam
 from neutronclient.openstack.common.gettextutils import _
 from neutronclient.openstack.common import strutils
 from neutronclient.version import __version__
@@ -277,6 +278,7 @@ COMMAND_V2 = {
     'nec-packet-filter-create': packetfilter.CreatePacketFilter,
     'nec-packet-filter-update': packetfilter.UpdatePacketFilter,
     'nec-packet-filter-delete': packetfilter.DeletePacketFilter,
+    'ipam-list': ipam.ListIpam,
 }

 COMMANDS = {'2.0': COMMAND_V2}
diff --git a/neutronclient/v2_0/client.py b/neutronclient/v2_0/client.py
index a102781..6b2e972 100644
--- a/neutronclient/v2_0/client.py
+++ b/neutronclient/v2_0/client.py
@@ -221,6 +221,7 @@ class Client(object):
     firewall_path = "/fw/firewalls/%s"
     net_partitions_path = "/net-partitions"
     net_partition_path = "/net-partitions/%s"
+    ipams_path = "/ipams"

     # API has no way to report plurals, so we have to hard code them
     EXTED_PLURALS = {'routers': 'router',
@@ -1187,6 +1188,14 @@ class Client(object):
         """Delete the specified packet filter."""
         return self.delete(self.packet_filter_path % packet_filter_id)

+    @APIParamsCall
+    def list_ipams(self, **_params):
+        """
+        Fetches a list of all ipams for a tenant
+        """
+        # Pass filters in "params" argument to do_request
+        return self.get(self.ipams_path, params=_params)
+
     def __init__(self, **kwargs):
         """Initialize a new client for the Neutron v2.0 API."""
         super(Client, self).__init__()
 

10,b)
#cat neutronclient/neutron/v2_0/ipam.py

import logging
from neutronclient.neutron.v2_0 import ListCommand

class ListIpam(ListCommand):
    resource = 'ipam'
    log = logging.getLogger(__name__ + '.ListIpam')
    _formatters = {}
    list_columns = ['id', 'name']


11)
Ref: http://control-that-vm.blogspot.in/2014/06/writing-cli-commands-for-neutronclient.html



Monday, November 24, 2014

AttributeError: 'NeutronPluginContrailCoreV2' object has no attribute 'get_ipams'

Fix
===



ERROR
======
* Neutron extension command failed

#neutron ipam-list
Request Failed: internal server error while processing your request.

#saju@myuuhost:~/neutron$ sudo python neutron-server --config-file /etc/neutron/neutron.conf --log-file /var/log/neutron/server.log --config-file /etc/neutron/plugins/opencontrail/ContrailPlugin.ini

/usr/lib/python2.7/dist-packages/eventlet/hubs/__init__.py:8: UserWarning: Module neutron was already imported from /home/saju/neutron/neutron/__init__.pyc, but /usr/lib/python2.7/dist-packages is being added to sys.path
  import pkg_resources
loc:===> /home/saju/neutron/neutron/__init__.pyc
2014-11-23 13:24:20.638    ERROR [neutron.api.extensions] Extension path 'extensions' doesn't exist!
2014-11-23 13:24:20.641  WARNING [neutron.api.extensions] Extension contrail not supported by any of loaded plugins
2014-11-23 13:24:20.644  WARNING [neutron.api.extensions] Extension policy not supported by any of loaded plugins
2014-11-23 13:24:20.646  WARNING [neutron.api.extensions] Extension route-table not supported by any of loaded plugins
2014-11-23 13:24:20.652  WARNING [neutron.api.extensions] Extension allowed-address-pairs not supported by any of loaded plugins
2014-11-23 13:24:20.655  WARNING [neutron.api.extensions] Extension dhcp_agent_scheduler not supported by any of loaded plugins
2014-11-23 13:24:20.659  WARNING [neutron.api.extensions] Extension extra_dhcp_opt not supported by any of loaded plugins
2014-11-23 13:24:20.662  WARNING [neutron.api.extensions] Extension extraroute not supported by any of loaded plugins
2014-11-23 13:24:20.672  WARNING [neutron.api.extensions] Extension fwaas not supported by any of loaded plugins
2014-11-23 13:24:20.675  WARNING [neutron.api.extensions] Extension flavor not supported by any of loaded plugins
2014-11-23 13:24:20.680  WARNING [neutron.api.extensions] Extension ext-gw-mode not supported by any of loaded plugins
2014-11-23 13:24:20.684  WARNING [neutron.api.extensions] Extension l3_agent_scheduler not supported by any of loaded plugins
2014-11-23 13:24:20.690  WARNING [neutron.api.extensions] Extension lbaas_agent_scheduler not supported by any of loaded plugins
2014-11-23 13:24:20.697  WARNING [neutron.api.extensions] Extension lbaas not supported by any of loaded plugins
2014-11-23 13:24:20.700  WARNING [neutron.api.extensions] Extension metering not supported by any of loaded plugins
2014-11-23 13:24:20.703  WARNING [neutron.api.extensions] Extension multi-provider not supported by any of loaded plugins
2014-11-23 13:24:20.709  WARNING [neutron.api.extensions] Extension provider not supported by any of loaded plugins
2014-11-23 13:24:20.713  WARNING [neutron.api.extensions] Extension routed-service-insertion not supported by any of loaded plugins
2014-11-23 13:24:20.714  WARNING [neutron.api.extensions] Extension router-service-type not supported by any of loaded plugins
2014-11-23 13:24:20.723  WARNING [neutron.api.extensions] Extension service-type not supported by any of loaded plugins
2014-11-23 13:24:20.729  WARNING [neutron.api.extensions] Extension vpnaas not supported by any of loaded plugins
2014-11-23 13:24:20.824  WARNING [keystoneclient.middleware.auth_token] Configuring auth_uri to point to the public identity endpoint is required; clients may not be able to authenticate against an admin endpoint
2014-11-23 13:24:20.825  WARNING [keystoneclient.middleware.auth_token] signing_dir is not owned by 0
2014-11-23 13:24:34.330    ERROR [neutron.api.v2.resource] index failed
Traceback (most recent call last):
  File "/home/saju/neutron/neutron/api/v2/resource.py", line 87, in resource
    result = method(request=request, **args)
  File "/home/saju/neutron/neutron/api/v2/base.py", line 304, in index
    return self._items(request, True, parent_id)
  File "/home/saju/neutron/neutron/api/v2/base.py", line 241, in _items
    obj_getter = getattr(self._plugin, self._plugin_handlers[self.LIST])
AttributeError: 'NeutronPluginContrailCoreV2' object has no attribute 'get_ipams'







WARNING [neutron.api.extensions] Extension ipam not supported by any of loaded plugins

Fix
===
The extension does not load and you see the WARNING "[neutron.api.extensions] Extension ipam not supported by any of loaded plugins" when starting "neutron-server".

* Add the name of the extension to the class variable "supported_extension_aliases" in the plugin class

Example:
---------------
class NeutronPluginContrailCoreV2(neutron_plugin_base_v2.NeutronPluginBaseV2,
                                  securitygroup.SecurityGroupPluginBase,
                                  portbindings_base.PortBindingBaseMixin,
                                  external_net.External_net):

    supported_extension_aliases = ["security-group", "router",
                                   "port-security", "binding", "agent",
                                   "quotas", "external-net", "ipam"]



ERROR
======

#sudo python neutron-server --config-file /etc/neutron/neutron.conf --log-file /var/log/neutron/server.log --config-file /etc/neutron/plugins/opencontrail/ContrailPlugin.ini

/usr/lib/python2.7/dist-packages/eventlet/hubs/__init__.py:8: UserWarning: Module neutron was already imported from /home/saju/neutron/neutron/__init__.pyc, but /usr/lib/python2.7/dist-packages is being added to sys.path
  import pkg_resources
loc:===> /home/saju/neutron/neutron/__init__.pyc
2014-11-23 12:54:47.984    ERROR [neutron.api.extensions] Extension path 'extensions' doesn't exist!
2014-11-23 12:54:47.987  WARNING [neutron.api.extensions] Extension contrail not supported by any of loaded plugins
2014-11-23 12:54:47.989  WARNING [neutron.api.extensions] Extension ipam not supported by any of loaded plugins
2014-11-23 12:54:47.991  WARNING [neutron.api.extensions] Extension policy not supported by any of loaded plugins
2014-11-23 12:54:47.994  WARNING [neutron.api.extensions] Extension route-table not supported by any of loaded plugins
2014-11-23 12:54:48.0  WARNING [neutron.api.extensions] Extension allowed-address-pairs not supported by any of loaded plugins
2014-11-23 12:54:48.4  WARNING [neutron.api.extensions] Extension dhcp_agent_scheduler not supported by any of loaded plugins
2014-11-23 12:54:48.8  WARNING [neutron.api.extensions] Extension extra_dhcp_opt not supported by any of loaded plugins
2014-11-23 12:54:48.10  WARNING [neutron.api.extensions] Extension extraroute not supported by any of loaded plugins
2014-11-23 12:54:48.21  WARNING [neutron.api.extensions] Extension fwaas not supported by any of loaded plugins
2014-11-23 12:54:48.25  WARNING [neutron.api.extensions] Extension flavor not supported by any of loaded plugins
2014-11-23 12:54:48.31  WARNING [neutron.api.extensions] Extension ext-gw-mode not supported by any of loaded plugins
2014-11-23 12:54:48.35  WARNING [neutron.api.extensions] Extension l3_agent_scheduler not supported by any of loaded plugins
2014-11-23 12:54:48.42  WARNING [neutron.api.extensions] Extension lbaas_agent_scheduler not supported by any of loaded plugins
2014-11-23 12:54:48.52  WARNING [neutron.api.extensions] Extension lbaas not supported by any of loaded plugins
2014-11-23 12:54:48.56  WARNING [neutron.api.extensions] Extension metering not supported by any of loaded plugins
2014-11-23 12:54:48.59  WARNING [neutron.api.extensions] Extension multi-provider not supported by any of loaded plugins
2014-11-23 12:54:48.65  WARNING [neutron.api.extensions] Extension provider not supported by any of loaded plugins
2014-11-23 12:54:48.69  WARNING [neutron.api.extensions] Extension routed-service-insertion not supported by any of loaded plugins
2014-11-23 12:54:48.71  WARNING [neutron.api.extensions] Extension router-service-type not supported by any of loaded plugins
2014-11-23 12:54:48.79  WARNING [neutron.api.extensions] Extension service-type not supported by any of loaded plugins
2014-11-23 12:54:48.85  WARNING [neutron.api.extensions] Extension vpnaas not supported by any of loaded plugins
2014-11-23 12:54:48.163  WARNING [keystoneclient.middleware.auth_token] Configuring auth_uri to point to the public identity endpoint is required; clients may not be able to authenticate against an admin endpoint
2014-11-23 12:54:48.164  WARNING [keystoneclient.middleware.auth_token] signing_dir is not owned by 0










AttributeError: 'module' object has no attribute 'VIF_TYPE_VROUTER'

Fix:
===
a)
Open contrail_plugin.py
#vim neutron/plugins/opencontrail/contrail_plugin.py



b)
Add the following statements at class level in the class NeutronPluginContrailCoreV2
#patch VIF_TYPES
portbindings.__dict__['VIF_TYPE_VROUTER'] = 'vrouter'
portbindings.VIF_TYPES.append(portbindings.VIF_TYPE_VROUTER)

Example:
-----------------  
class  NeutronPluginContrailCoreV2(neutron_plugin_base_v2.NeutronPluginBaseV2,
                                      securitygroup.SecurityGroupPluginBase,
                                      portbindings_base.PortBindingBaseMixin,
                                      external_net.External_net):
        # patch VIF_TYPES
        portbindings.__dict__['VIF_TYPE_VROUTER'] = 'vrouter'
        portbindings.VIF_TYPES.append(portbindings.VIF_TYPE_VROUTER)


ERROR:
======
#sudo python neutron-server --config-file /etc/neutron/neutron.conf --log-file /var/log/neutron/server.log --config-file /etc/neutron/plugins/opencontrail/ContrailPlugin.ini

/usr/lib/python2.7/dist-packages/eventlet/hubs/__init__.py:8: UserWarning: Module neutron was already imported from /home/saju/neutron/neutron/__init__.pyc, but /usr/lib/python2.7/dist-packages is being added to sys.path
  import pkg_resources
loc:===> /home/saju/neutron/neutron/__init__.pyc
2014-11-23 11:02:30.650    ERROR [neutron.service] Unrecoverable error: please check log for details.
Traceback (most recent call last):
  File "/home/saju/neutron/neutron/service.py", line 105, in serve_wsgi
    service.start()
  File "/home/saju/neutron/neutron/service.py", line 74, in start
    self.wsgi_app = _run_wsgi(self.app_name)
  File "/home/saju/neutron/neutron/service.py", line 173, in _run_wsgi
    app = config.load_paste_app(app_name)
  File "/home/saju/neutron/neutron/common/config.py", line 170, in load_paste_app
    app = deploy.loadapp("config:%s" % config_path, name=app_name)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 247, in loadapp
    return loadobj(APP, uri, name=name, **kw)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 272, in loadobj
    return context.create()
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in create
    return self.object_type.invoke(self)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 144, in invoke
    **context.local_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/util.py", line 56, in fix_call
    val = callable(*args, **kw)
  File "/usr/lib/python2.7/dist-packages/paste/urlmap.py", line 25, in urlmap_factory
    app = loader.get_app(app_name, global_conf=global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 350, in get_app
    name=name, global_conf=global_conf).create()
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in create
    return self.object_type.invoke(self)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 144, in invoke
    **context.local_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/util.py", line 56, in fix_call
    val = callable(*args, **kw)
  File "/home/saju/neutron/neutron/auth.py", line 69, in pipeline_factory
    app = loader.get_app(pipeline[-1])
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 350, in get_app
    name=name, global_conf=global_conf).create()
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in create
    return self.object_type.invoke(self)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 146, in invoke
    return fix_call(context.object, context.global_conf, **context.local_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/util.py", line 56, in fix_call
    val = callable(*args, **kw)
  File "/home/saju/neutron/neutron/api/v2/router.py", line 71, in factory
    return cls(**local_config)
  File "/home/saju/neutron/neutron/api/v2/router.py", line 75, in __init__
    plugin = manager.NeutronManager.get_plugin()
  File "/home/saju/neutron/neutron/manager.py", line 222, in get_plugin
    return weakref.proxy(cls.get_instance().plugin)
  File "/home/saju/neutron/neutron/manager.py", line 216, in get_instance
    cls._create_instance()
  File "/home/saju/neutron/neutron/openstack/common/lockutils.py", line 249, in inner
    return f(*args, **kwargs)
  File "/home/saju/neutron/neutron/manager.py", line 202, in _create_instance
    cls._instance = cls()
  File "/home/saju/neutron/neutron/manager.py", line 114, in __init__
    plugin_provider)
  File "/home/saju/neutron/neutron/manager.py", line 142, in _get_plugin_instance
    return plugin_class()
  File "/home/saju/neutron/neutron/plugins/opencontrail/contrail_plugin.py", line 72, in __init__
    self.base_binding_dict = self._get_base_binding_dict()
  File "/home/saju/neutron/neutron/plugins/opencontrail/contrail_plugin.py", line 78, in _get_base_binding_dict
    portbindings.VIF_TYPE: portbindings.VIF_TYPE_VROUTER,
AttributeError: 'module' object has no attribute 'VIF_TYPE_VROUTER'
2014-11-23 11:02:30.657 CRITICAL [neutron] 'module' object has no attribute 'VIF_TYPE_VROUTER'

Sunday, November 23, 2014

OpenStack Development Using DevStack

1)
List all screen sessions of current user
#screen -ls

2)
Howto Attach to a not detached screen session.
#screen -x



3)
Reattach to a screen session
#screen -r session_id_or_name

4)
To see all screen commands and key bindings:
Ctrl + a, Then Press ?

5)
Detach/Exit from a screen session
Ctrl + a, Then Press d

6)
Kill a screen session
Ctrl + a, Then Press Shift + k

7)
How to move to next screen window/tab in a screen session
Ctrl + a, Then Press n

8)
How to move to previous screen window/tab in a screen session
Ctrl + a, Then Press p

9)
How to list name of all screen windows/tabs and select from there
Ctrl + a, Then Press Shift + '

10)
How to toggle to the window/tab displayed previously
Ctrl + a, Then Ctrl + a again

11)
How to copy logs from screen
* To start copy mode, use Ctrl + a, Then Press [
* To start copying, press Enter
* Select the text using the arrow keys
* Press Enter again to stop the selection and copy it to the clipboard
* To exit copy mode, use Ctrl + a, Then Press ] or Ctrl + C

12)
How to restart a service
After doing any changes in the code, if you wish to restart a service,
* Go to the window/tab of the service, then use Ctrl + C to stop that service.
* To start the service again, use the UP arrow key to recall the command that started the service and press Enter.

* The horizon service can't be restarted this way; you need to restart the Apache web server instead:
#service apache2 restart

13)
Debugging with pdb and pudb



OpenStack OpenContrail Neutron IPAM API examples

1)
List all IPAMs

#neutron --debug ipam-list

a)

curl request generated by the above CLI command
#curl -s http://127.0.0.1:9696/v2.0/ipams.json -X GET -H "X-Auth-Token:$TOKEN" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient" | python -mjson.tool

#curl -s http://127.0.0.1:9696/v2.0/ipams -X GET -H "X-Auth-Token:$TOKEN" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient" | python -mjson.tool


2)
Show IPAM

#neutron --debug ipam-show 58f9369b-d3ef-428a-bbc9-2b8c0e06b752

a)

curl request generated by the above CLI command
#curl -s http://127.0.0.1:9696/v2.0/ipams/58f9369b-d3ef-428a-bbc9-2b8c0e06b752.json -X GET -H "X-Auth-Token:$TOKEN" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient" | python -mjson.tool

#curl -s http://127.0.0.1:9696/v2.0/ipams/58f9369b-d3ef-428a-bbc9-2b8c0e06b752 -X GET -H "X-Auth-Token:$TOKEN" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient" | python -mjson.tool


3)
Delete IPAM

#neutron --debug ipam-delete d47aa080-cf94-46cc-9c07-da11819dc7ca


a)
curl request generated by the above CLI command
#curl -s http://127.0.0.1:9696/v2.0/ipams/d47aa080-cf94-46cc-9c07-da11819dc7ca.json -X DELETE -H "X-Auth-Token:$TOKEN" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient" | python -mjson.tool

#curl -s http://127.0.0.1:9696/v2.0/ipams/d47aa080-cf94-46cc-9c07-da11819dc7ca -X DELETE -H "X-Auth-Token:$TOKEN" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient" | python -mjson.tool


4)
Create IPAM

#neutron --debug ipam-create ipam1

a) 
curl request generated by the above CLI command
#curl -s http://127.0.0.1:9696/v2.0/ipams.json -X POST -H "X-Auth-Token:$TOKEN" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient" -d '{"ipam": {"name": "ipam1", "mgmt": {"method": "fixed"}}}'

#curl -s http://127.0.0.1:9696/v2.0/ipams -X POST -H "X-Auth-Token:$TOKEN" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient" -d '{"ipam": {"name": "ipam1", "mgmt": {"method": "fixed"}}}' | python -mjson.tool


b)

Create an IPAM with "mgmt" data by making a custom curl request
#curl -i http://127.0.0.1:9696/v2.0/ipams.json -X POST -H "X-Auth-Token:$TOKEN" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient" -d '{"ipam": {"name": "ipam28", "mgmt": {"ipam_method": null, "ipam_dns_method": "virtual-dns-server", "ipam_dns_server": {"tenant_dns_server_address": {"ip_address": []}, "virtual_dns_server_name": "default-domain:vdns"}, "dhcp_option_list": {"dhcp_option": [{"dhcp_option_value": "mydomain", "dhcp_option_name": "15"}, {"dhcp_option_value": "192.168.56.1", "dhcp_option_name": "4"}]}, "host_routes": null, "cidr_block": null}}}'
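The nested "mgmt" body above is easy to get wrong when typed inline. As a sketch, the same payload can be built and validated in Python before posting it (the field values are taken from the curl example; the endpoint and token still come from your environment):

```python
import json

# Build the "ipam" request body from the custom curl example above as a
# Python dict, then serialize it. Validating it with json.loads before
# posting catches quoting/nesting mistakes early.
ipam_body = {
    "ipam": {
        "name": "ipam28",
        "mgmt": {
            "ipam_method": None,
            "ipam_dns_method": "virtual-dns-server",
            "ipam_dns_server": {
                "tenant_dns_server_address": {"ip_address": []},
                "virtual_dns_server_name": "default-domain:vdns",
            },
            "dhcp_option_list": {
                "dhcp_option": [
                    {"dhcp_option_name": "15", "dhcp_option_value": "mydomain"},
                    {"dhcp_option_name": "4", "dhcp_option_value": "192.168.56.1"},
                ]
            },
            "host_routes": None,
            "cidr_block": None,
        },
    }
}

payload = json.dumps(ipam_body)
```

The resulting string can be passed to curl with -d, or sent directly with an HTTP library, together with the X-Auth-Token header shown in the curl examples.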

5)
Update IPAM


#neutron --debug ipam-update 07ae76d2-9fe4-466f-a98e-f82bc2b819d0 --name ipam5


a)

curl request generated by the above CLI command
#curl -s http://127.0.0.1:9696/v2.0/ipams/07ae76d2-9fe4-466f-a98e-f82bc2b819d0.json -X PUT -H "X-Auth-Token:$TOKEN" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient" -d '{"ipam": {"name": "ipam5"}}' | python -mjson.tool

#curl -s http://127.0.0.1:9696/v2.0/ipams/07ae76d2-9fe4-466f-a98e-f82bc2b819d0 -X PUT -H "X-Auth-Token:$TOKEN" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient" -d '{"ipam": {"name": "ipam5"}}' | python -mjson.tool


b)
Update an IPAM's "mgmt" data by making a custom curl request
#curl -i http://127.0.0.1:9696/v2.0/ipams/2e5810de-c681-4b0d-a346-99469f817b06.json -X PUT -H "X-Auth-Token:$TOKEN" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient" -d '{"ipam": {"mgmt": {"ipam_method": null, "ipam_dns_method": "virtual-dns-server", "ipam_dns_server": {"tenant_dns_server_address": {"ip_address": []}, "virtual_dns_server_name": "default-domain:vdns"}, "dhcp_option_list": {"dhcp_option": [{"dhcp_option_value": "mydomain", "dhcp_option_name": "15"}, {"dhcp_option_value": "192.168.56.1", "dhcp_option_name": "4"}]}, "host_routes": null, "cidr_block": null}}}'

Saturday, November 22, 2014

OpenStack API Examples using curl

1)
*Export keystone credentials

export OS_USERNAME=admin
export OS_PASSWORD=secret123
export OS_TENANT_NAME=demo
export OS_AUTH_URL=http://127.0.0.1:35357/v2.0


Run CLI commands with the "--debug" option, copy the curl command from the output, and execute it directly without the python client.





2)
Get Token


a)
*CLI command
#keystone --debug token-get

b)

*curl command
#curl -i -X POST http://127.0.0.1:35357/v2.0/tokens -H "Content-Type: application/json" -H "User-Agent: python-keystoneclient" -d '{"auth": {"tenantName": "demo", "passwordCredentials": {"username": "admin", "password": "secret123"}}}'

3)
Save the token into a shell variable, like

TOKEN=blablabla
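The token id can also be pulled out of the POST /tokens response programmatically instead of copying it by hand. A minimal sketch, using an abbreviated sample response (a real keystone v2.0 response contains many more fields):

```python
import json

# Abbreviated sample of a keystone v2.0 POST /tokens response body;
# in practice you would feed in the JSON returned by the curl command above.
sample_response = """
{"access": {"token": {"id": "blablabla",
                      "expires": "2014-11-23T12:00:00Z"},
            "user": {"name": "admin"}}}
"""

# The token id lives at access -> token -> id.
token = json.loads(sample_response)["access"]["token"]["id"]
```

The shell equivalent is to pipe the curl output through `python -c "import sys,json; print(json.load(sys.stdin)['access']['token']['id'])"` and assign the result to TOKEN.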

4)
List all images


a)
CLI command
#glance --debug image-list

b)
*curl command
*Note: Use double quotes for -H "X-Auth-Token:$TOKEN" so the shell expands $TOKEN

#curl -i -X GET -H "X-Auth-Token:$TOKEN" -H 'Content-Type: application/json' -H 'User-Agent: python-glanceclient' 'http://127.0.0.1:9292/v1/images/detail?sort_key=name&sort_dir=asc&limit=20'

*Replace the option "-i" with "-s" and filter with "python -mjson.tool" to get readable formatted json output
#curl -s -X GET -H "X-Auth-Token:$TOKEN" -H 'Content-Type: application/json' -H 'User-Agent: python-glanceclient' 'http://127.0.0.1:9292/v1/images/detail?sort_key=name&sort_dir=asc&limit=20' | python -mjson.tool

5)
List all Virtual Machines


a)
CLI command
#nova --debug list

b)
*curl command
#curl -i 'http://127.0.0.1:8774/v1.1/02ef892087a640bcb66bd42a3ceccc79/servers/detail' -X GET -H "X-Auth-Project-Id: demo" -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token:$TOKEN"

*Replace the option "-i" with "-s" and filter with "python -mjson.tool" to get readable formatted json output
#curl -s 'http://127.0.0.1:8774/v1.1/02ef892087a640bcb66bd42a3ceccc79/servers/detail' -X GET -H "X-Auth-Project-Id: demo" -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token:$TOKEN" | python -mjson.tool

6)
List all networks


a)
CLI command
#neutron --debug net-list

b)

*curl command
#curl -i http://127.0.0.1:9696/v2.0/networks.json -X GET -H "X-Auth-Token:$TOKEN" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient"

*Replace the option "-i" with "-s" and filter with "python -mjson.tool" to get readable formatted json output
#curl -s http://127.0.0.1:9696/v2.0/networks.json -X GET -H "X-Auth-Token:$TOKEN" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient" | python -mjson.tool

7)
List all volumes


a)
CLI command
#cinder --debug list

b)
*curl command
#curl -i http://127.0.0.1:8776/v1/02ef892087a640bcb66bd42a3ceccc79/volumes/detail -X GET -H "X-Auth-Project-Id: demo" -H "User-Agent: python-cinderclient" -H "Accept: application/json" -H "X-Auth-Token:$TOKEN"

*Replace the option "-i" with "-s" and filter with "python -mjson.tool" to get readable formatted json output
#curl -s http://127.0.0.1:8776/v1/02ef892087a640bcb66bd42a3ceccc79/volumes/detail -X GET -H "X-Auth-Project-Id: demo" -H "User-Agent: python-cinderclient" -H "Accept: application/json" -H "X-Auth-Token:$TOKEN" | python -mjson.tool



Linux How to use man page to find information

1)
Search in all man pages
#man -k "search-string"

2)
Find the location of man page of a command
#man -wa {command}
#whereis {command}

Example:
#man -wa curl
/usr/share/man/man1/curl.1.gz
 

#whereis curl
curl: /usr/bin/curl /usr/bin/X11/curl /usr/share/man/man1/curl.1.gz


3)
Open a man page
#man {command}

#man {path-to-man-page}

Example:
#man -wa curl
/usr/share/man/man1/curl.1.gz
#

#man /usr/share/man/man1/curl.1.gz
#

Friday, November 21, 2014

Wednesday, November 19, 2014

How To git change url of remote origin and pull without conflict

1)
Set new URL for remote 'origin'

#git remote set-url origin https://github.com/myrepo/MYPRO.git

2)
Fetch data from new remote 'origin'

#git fetch origin

3)
Pull and rebase changes from new remote 'origin'

#git pull --rebase origin
https://www.atlassian.com/git/tutorials/syncing/git-pull

4)
Pull changes from branch 'migration_django_15' in the new remote 'origin'

#git pull origin migration_django_15

Tuesday, November 18, 2014

How To git checkout branches from different remotes

1)
Add your remote

#git remote add github-remote https://github.com/myrepo/MYPRO.git

2)
Fetch remote

#git fetch github-remote

3)
Checkout the branch "migration_django_15" from the remote "github-remote" and rename to "migration_django_15_github"

#git checkout -b migration_django_15_github --track github-remote/migration_django_15

4)
Now in local machine, I am in "migration_django_15_github" branch.
Pull the latest changes from "migration_django_15_github" branch in the remote https://github.com/myrepo/MYPRO.git

#git pull





Monday, November 17, 2014

Git remove file or folder from all commits in all branches and tags

1)
Go to the git repo

2)
Remove a file or folder from all commits in all branches and tags
#git filter-branch -f --index-filter "git rm -rf --cached --ignore-unmatch FOLDERNAME-OR-FILENAME" -- --all

* Replace FOLDERNAME-OR-FILENAME with file or folder you want to remove

How to Install Contrail with OpenStack in Ubuntu, Single Node Setup

How to Install Contrail Release 1.20 with OpenStack in Ubuntu 12.04.4 LTS, Single Node Setup

http://www.juniper.net/techpubs/en_US/contrail1.0/information-products/topic-collections/release-notes/index.html?jd0e331.html

http://www.opencontrail.org/opencontrail-quick-start-guide/

1)
Create a VirtualBox VM with 4GB RAM, 20GB Disk and Ubuntu 12.04.4 LTS
http://releases.ubuntu.com/12.04/ubuntu-12.04.4-server-amd64.iso



2)
Download contrail install packages
http://www.juniper.net/support/downloads/?p=contrail#sw

* Download "Contrail Package for Ubuntu 12.04.4 LTS"

3)
Install the Contrail packages
a)
Login as root user
#su -

b)
Install Packages
#dpkg -i contrail-install-packages_1.20-63~icehouse_all.deb

4)
Go to /opt/contrail/.
There you can see two directories, "contrail_packages" and "puppet".
Go to /opt/contrail/contrail_packages and run setup.sh. This step creates the Contrail package repository as well as the Fabric utilities needed for provisioning.
#cd /opt/contrail/contrail_packages
#./setup.sh

5)
Populate the testbed definitions file, see http://www.juniper.net/techpubs/en_US/contrail1.0/topics/task/installation/testbed-file-vnc.html
a)
Create /opt/contrail/utils/fabfile/testbeds/testbed.py
#cp /opt/contrail/utils/fabfile/testbeds/testbed_singlebox_example.py /opt/contrail/utils/fabfile/testbeds/testbed.py

b)
Edit testbed.py

* Replace "root@1.1.1.1" with "root@<server-ip>"
* Replace "secret" with the root password
* Replace "secret123" with the OpenStack admin password

#vim /opt/contrail/utils/fabfile/testbeds/testbed.py

host1 = 'root@127.0.0.1'

ext_routers = []

router_asn = 64512

host_build = 'root@127.0.0.1'

env.roledefs = {
     'all': [host1],
     'database': [host1],
     'cfgm': [host1],
     'control': [host1],
     'compute': [host1],
     'collector': [host1],
     'webui': [host1],
     'build': [host_build],
}

#Openstack admin password
env.openstack_admin_password = 'secret123'

#Hostnames
env.hostnames = {
    'all': ['contrailsys']
}

env.password = 'rootpass'

#Passwords of each host
env.passwords = {
    host1: 'rootpass',
    host_build: 'rootpass',
}

#For reimage purpose
env.ostypes = {
    host1:'ubuntu',
}

6)
Install Contrail with OpenStack
Doc: /opt/contrail/utils/README.fabric

a)
Install contrail
#cd /opt/contrail/utils/
#fab -c fabrc install_contrail

b)
Install OpenStack and Configure
#fab setup_all
* Your system will reboot

7)
Open Horizon Dashboard
http://ip-of-contrail-vm/horizon/ #IP of your VirtualBox VM
username: admin
password: secret123

8)
Open Contrail Webui
https://ip-of-contrail-vm:8080
https://ip-of-contrail-vm:8143/login
username: admin
password: secret123

9)
Do the following steps if you are trying to set up inside a virtual machine
a)
#sudo vim /etc/nova/nova-compute.conf

[DEFAULT]
libvirt_type=qemu
compute_driver=libvirt.LibvirtDriver


b)
#sudo service nova-compute restart

http://fosshelp.blogspot.com/2014/11/openstack-libvirterror-internal-error.html

c)
Create a VM via the horizon dashboard http://ip-of-contrail-vm/horizon

d)
Open the log file and check for error
#tail -f /var/log/nova/nova-scheduler.log 

10)
#sudo netstat -tuplen | grep cont
tcp        0      0 0.0.0.0:8091            0.0.0.0:*               LISTEN      0          13132       2389/contrail-query
tcp        0      0 0.0.0.0:34302           0.0.0.0:*               LISTEN      0          23312       2381/contrail-vrout
tcp        0      0 0.0.0.0:9090            0.0.0.0:*               LISTEN      0          23358       2381/contrail-vrout
tcp        0      0 0.0.0.0:8083            0.0.0.0:*               LISTEN      0          12666       2382/contrail-contr
tcp        0      0 0.0.0.0:179             0.0.0.0:*               LISTEN      0          12589       2382/contrail-contr
tcp        0      0 0.0.0.0:8085            0.0.0.0:*               LISTEN      0          23360       2381/contrail-vrout
tcp        0      0 0.0.0.0:5269            0.0.0.0:*               LISTEN      0          12610       2382/contrail-contr
tcp        0      0 0.0.0.0:8086            0.0.0.0:*               LISTEN      0          13219       2386/contrail-colle
tcp        0      0 0.0.0.0:8089            0.0.0.0:*               LISTEN      0          13270       2386/contrail-colle
udp        0      0 0.0.0.0:56703           0.0.0.0:*                           0          23342       2381/contrail-vrout
udp        0      0 0.0.0.0:32957           0.0.0.0:*                           0          23343       2381/contrail-vrout

11)
Introspect

Please note the Ports

Modules for ControlNode
http://ip-of-contrail-node:8083

Modules for ApiServer
http://ip-of-contrail-node:8084

Modules for VRouterAgent
http://ip-of-contrail-node:8085

Modules for Schema
http://ip-of-contrail-node:8087

Modules for ServiceMonitor
http://ip-of-contrail-node:8088

Modules for Collector
http://ip-of-contrail-node:8089

Modules for OpServer
http://ip-of-contrail-node:8090

Modules for QueryEngine
http://ip-of-contrail-node:8091

Modules for DnsAgent
http://ip-of-contrail-node:8092
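The port map above can be wrapped in a small helper so you don't have to remember which introspect port belongs to which module. A sketch (the node IP is a placeholder you supply):

```python
# Introspect port map for the Contrail modules listed above.
INTROSPECT_PORTS = {
    "ControlNode": 8083,
    "ApiServer": 8084,
    "VRouterAgent": 8085,
    "Schema": 8087,
    "ServiceMonitor": 8088,
    "Collector": 8089,
    "OpServer": 8090,
    "QueryEngine": 8091,
    "DnsAgent": 8092,
}

def introspect_url(node_ip, module):
    """Return the introspect URL for a Contrail module on node_ip."""
    return "http://%s:%d" % (node_ip, INTROSPECT_PORTS[module])
```

For example, introspect_url("ip-of-contrail-node", "ControlNode") gives the ControlNode introspect URL, which you can then open in a browser or fetch with curl.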

Thursday, November 13, 2014

OpenStack "libvirtError: internal error no supported architecture for os type 'hvm'"

Fix
====
a)
#sudo vim /etc/nova/nova-compute.conf
[DEFAULT]
libvirt_type=qemu
compute_driver=libvirt.LibvirtDriver


b)
#sudo service nova-compute restart

Error
======
2014-11-13 16:46:35.880 1943 INFO nova.scheduler.filter_scheduler [req-d3d831b7-d5bd-43d8-9a95-39914b90fe3b f62fed25125b40c59b290e4b9f820ccb ead267fc9a9e451ba390143c62d1f00d] Attempting to build 1 instance(s) uuids: [u'c792d53c-e057-4d43-b088-b253165fb603']
2014-11-13 16:46:35.913 1943 INFO nova.scheduler.filter_scheduler [req-d3d831b7-d5bd-43d8-9a95-39914b90fe3b f62fed25125b40c59b290e4b9f820ccb ead267fc9a9e451ba390143c62d1f00d] Choosing host WeighedHost [host: contrailsys, weight: 4214.0] for instance c792d53c-e057-4d43-b088-b253165fb603
2014-11-13 16:46:36.193 1943 INFO nova.openstack.common.rpc.common [req-d3d831b7-d5bd-43d8-9a95-39914b90fe3b f62fed25125b40c59b290e4b9f820ccb ead267fc9a9e451ba390143c62d1f00d] Connected to AMQP server on 127.0.0.1:5672
2014-11-13 16:46:45.890 1943 INFO nova.scheduler.filter_scheduler [req-d3d831b7-d5bd-43d8-9a95-39914b90fe3b f62fed25125b40c59b290e4b9f820ccb ead267fc9a9e451ba390143c62d1f00d] Attempting to build 1 instance(s) uuids: [u'c792d53c-e057-4d43-b088-b253165fb603']
2014-11-13 16:46:45.894 1943 ERROR nova.scheduler.filter_scheduler [req-d3d831b7-d5bd-43d8-9a95-39914b90fe3b f62fed25125b40c59b290e4b9f820ccb ead267fc9a9e451ba390143c62d1f00d] [instance: c792d53c-e057-4d43-b088-b253165fb603] Error from last host: contrailsys (node contrailsys): [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1040, in _build_instance\n    set_access_ip=set_access_ip)\n', u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1413, in _spawn\n    LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n', u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1410, in _spawn\n    block_device_info)\n', u'  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2071, in spawn\n    block_device_info, context=context)\n', u'  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3225, in _create_domain_and_network\n    domain = self._create_domain(xml, instance=instance, power_on=power_on)\n', u'  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3159, in _create_domain\n    raise e\n', u"libvirtError: internal error no supported architecture for os type 'hvm'\n"]
2014-11-13 16:46:45.940 1943 WARNING nova.scheduler.driver [req-d3d831b7-d5bd-43d8-9a95-39914b90fe3b f62fed25125b40c59b290e4b9f820ccb ead267fc9a9e451ba390143c62d1f00d] [instance: c792d53c-e057-4d43-b088-b253165fb603] Setting instance to ERROR state.


Wednesday, November 12, 2014

OpenStack: how haproxy redirects CLI/API requests to the Contrail API server

OpenStack: how haproxy redirects CLI/API requests to the neutron server

1)
The Neutron CLI sends requests to haproxy, which listens on port 9696.


You can find the neutron endpoint (which points at the haproxy port) in /etc/nova/nova.conf

#sudo vim /etc/nova/nova.conf
quantum_url = http://localhost:9696/
neutron_url = http://127.0.0.1:9696/



2)
Find the ID of the process running on port 9696


#sudo netstat -tuplen | grep 9696
tcp        0      0 0.0.0.0:9696            0.0.0.0:*               LISTEN      0          10659       1800/haproxy

Note:
------
Process ID : 1800/haproxy

3)
Find the process by Process ID 1800


#ps -aux | grep 1800
Warning: bad ps syntax, perhaps a bogus '-'? See http://procps.sf.net/faq.html
haproxy   1800  0.2  0.0  21568  2148 ?        Ss   13:12   0:37 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid

Note:
------
Location of haproxy binary: /usr/sbin/haproxy
haproxy config file: /etc/haproxy/haproxy.cfg

4)
Open haproxy config file /etc/haproxy/haproxy.cfg


#sudo vim /etc/haproxy/haproxy.cfg

#contrail-config-marker-start
listen contrail-config-stats :5937
   mode http
   stats enable
   stats uri /
   stats auth haproxy:contrail123

frontend quantum-server *:9696
    default_backend    quantum-server-backend


frontend  contrail-api *:8082
    default_backend    contrail-api-backend


frontend  contrail-discovery *:5998
    default_backend    contrail-discovery-backend

backend quantum-server-backend
    option nolinger
    balance     roundrobin
    server 127.0.0.1 127.0.0.1:9697 check inter 2000 rise 2 fall 3


    #server  10.84.14.2 10.84.14.2:9697 check

backend contrail-api-backend
    option nolinger
    balance     roundrobin
    server 127.0.0.1 127.0.0.1:9100 check inter 2000 rise 2 fall 3


    #server  10.84.14.2 10.84.14.2:9100 check
    #server  10.84.14.2 10.84.14.2:9101 check

backend contrail-discovery-backend
    option nolinger
    balance     roundrobin
    server 127.0.0.1 127.0.0.1:9110 check inter 2000 rise 2 fall 3

Note:
--------

Please note the Settings of quantum/neutron and contrail

4,a)

frontend quantum-server *:9696
    default_backend    quantum-server-backend

backend quantum-server-backend
    option nolinger
    balance     roundrobin
    server 127.0.0.1 127.0.0.1:9697 check inter 2000 rise 2 fall 3


* This means haproxy redirects all traffic arriving on port 9696 to port 9697 (where neutron-server is running)

4,a1)
Find the ID of the process running on port 9696

#sudo netstat -tuplen | grep 9696
tcp        0      0 0.0.0.0:9696            0.0.0.0:*               LISTEN      0          10659       1800/haproxy

4,a2)
Find the name of the process with ID 1800; it is haproxy.


#ps -aux | grep 1800
Warning: bad ps syntax, perhaps a bogus '-'? See http://procps.sf.net/faq.html
haproxy   1800  0.2  0.0  21568  2136 ?        Ss   13:12   0:42 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid

4,a3)
Find the ID of the process running on port 9697


#sudo netstat -tuplen | grep 9697
tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      120        428165      323/python

4,a4)
Find the name of the process with ID 323; it is neutron-server.


#ps -aux | grep 323
neutron    323  0.1  0.9 113236 46356 ?        Ss   16:56   0:02 /usr/bin/python /usr/bin/neutron-server --config-file /etc/neutron/neutron.conf --log-file /var/log/neutron/server.log --config-file /etc/neutron/plugins/opencontrail/ContrailPlugin.ini


4,b)

frontend  contrail-api *:8082
    default_backend    contrail-api-backend

backend contrail-api-backend
    option nolinger
    balance     roundrobin
    server 127.0.0.1 127.0.0.1:9100 check inter 2000 rise 2 fall 3


* This means haproxy redirects all traffic arriving on port 8082 to port 9100 (where contrail-api is running)

4,b1)
Find the ID of the process running on port 8082


#sudo netstat -tuplen | grep 8082
tcp        0      0 0.0.0.0:8082            0.0.0.0:*               LISTEN      0          10660       1800/haproxy

4,b2)
Find the name of the process with ID 1800; it is haproxy.


#ps -aux | grep 1800
Warning: bad ps syntax, perhaps a bogus '-'? See http://procps.sf.net/faq.html
haproxy   1800  0.2  0.0  21568  2136 ?        Ss   13:12   0:42 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid

4,b3)
Find the ID of the process running on port 9100


#sudo netstat -tuplen | grep 9100
tcp        0      0 0.0.0.0:9100            0.0.0.0:*               LISTEN      0          31764       1918/python

4,b4)
Find the name of the process with ID 1918; it is contrail-api.


#ps -aux | grep 1918

root      1918  0.6  1.1 324340 56180 ?        Sl   13:13   1:43 /usr/bin/python /usr/bin/contrail-api --conf_file /etc/contrail/contrail-api.conf --listen_port 9100 --worker_id 0

How to Debug OpenStack Neutron and Contrail APIs

1)
Export the credentials

export OS_USERNAME=admin
export OS_PASSWORD=secret123
export OS_TENANT_NAME=myproject1
export OS_AUTH_URL=http://192.168.56.101:35357/v2.0


2)
Verify the credentials

#keystone token-get

3)
List all Virtual Networks

a)
Using Contrail API

http://192.168.56.101:9100/virtual-networks

b)
Using CLI via Neutron

#neutron --help
#neutron net-list
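Both lookups above boil down to a URL plus the standard auth headers. A minimal Python sketch of the same requests (the endpoint and header set mirror the examples in this post; an actual call would need a reachable API server and a valid token):

```python
# contrail-api endpoint from the example above; adjust to your setup.
CONTRAIL_API = "http://192.168.56.101:9100"

def contrail_list_url(resource):
    """Build a contrail-api collection URL, e.g. for 'virtual-networks'."""
    return "%s/%s" % (CONTRAIL_API, resource)

def auth_headers(token):
    """The header set used by the curl examples in this post."""
    return {
        "X-Auth-Token": token,
        "Content-Type": "application/json",
        "Accept": "application/json",
    }

# An actual request would look like (needs a live server and token):
# requests.get(contrail_list_url("virtual-networks"),
#              headers=auth_headers(token))
```

The same auth_headers dict works for the neutron endpoint on port 9696, since both services authenticate via the X-Auth-Token header.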

OpenContrail: extract packages from contrail-install-packages

1)
Download contrail install packages

http://www.juniper.net/support/downloads/?p=contrail#sw
* Download "Contrail Package for Ubuntu 12.04.4 LTS + Havana"

2)
Login as root user

#su -

3)
Install Packages

#dpkg -i contrail-install-packages_1.10-34~havana_all.deb

4)
Extract all packages

#cd /opt/contrail/contrail_packages
#sudo tar -xzf contrail_debs.tgz


5)
Search

#ls | grep contrail

Tuesday, November 11, 2014

How to upload a new image with the OpenStack glance CLI

How to upload a new image

1)
Export the credentials

export OS_USERNAME=admin
export OS_PASSWORD=secret123
export OS_TENANT_NAME=myproject1
export OS_AUTH_URL=http://192.168.56.101:35357/v2.0


2)
Verify the credentials

#keystone token-get

3)
List all images

#glance image-list

4)
Upload an image (cirros)

#glance image-create --progress --name cirros-0.3.2 --disk-format qcow2 --container-format bare --is-public True --file cirros-0.3.2-x86_64-disk.img

5)
List all images

#glance image-list