Custom Search

Monday, July 27, 2015

OpenStack ZeroMQ (ZMQ) receiver listening port 9501

zmq-receiver listens on tcp://*:9501 with a PULL socket and, based on the topic name
extracted from the received data, forwards the message to the respective local
service over IPC.

matchmaker_ringfile   /etc/nova/matchmaker_ring.json                                    Matchmaker ring file (JSON)
rpc_zmq_bind_address  '*'                                                               ZeroMQ bind address
rpc_zmq_matchmaker    ceilometer.openstack.common.rpc.matchmaker.MatchMakerLocalhost    MatchMaker drivers
rpc_zmq_port          9501                                                              ZeroMQ receiver listening port
rpc_zmq_port_pub      9502                                                              ZeroMQ fanout publisher port
rpc_zmq_contexts      1                                                                 Number of ZeroMQ contexts
rpc_zmq_ipc_dir       /var/run/openstack                                                Directory for holding IPC sockets
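A minimal pyzmq sketch of that receiver loop, assuming the topic is carried in the
first frame of the multipart message; the per-topic IPC socket path under
rpc_zmq_ipc_dir is illustrative, not the exact naming used by oslo.messaging:

import zmq

context = zmq.Context()

# PULL socket bound on rpc_zmq_port; remote nodes connect and push RPC messages here.
receiver = context.socket(zmq.PULL)
receiver.bind('tcp://*:9501')

# One PUSH socket per topic, forwarding over IPC to the local consumer.
forwarders = {}

while True:
    msg = receiver.recv_multipart()          # e.g. [topic, payload, ...]
    topic = msg[0].decode()
    if topic not in forwarders:
        push = context.socket(zmq.PUSH)
        # IPC socket under rpc_zmq_ipc_dir (path naming is illustrative).
        push.connect('ipc:///var/run/openstack/zmq_topic_%s' % topic)
        forwarders[topic] = push
    forwarders[topic].send_multipart(msg)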

Linux tmux examples

#tmux ls ---- List tmux sessions

#tmux new -s new_session_name ---- Create a new named session

#Ctrl + b, d ---- Detach from a tmux session

#tmux a -t session_name ---- Reattach to an existing tmux session

#Ctrl + b, c ---- Create a new window (tab)

#Ctrl + b, , ---- Rename the current window (tab)

https://danielmiessler.com/study/tmux/

http://www.dayid.org/comp/tm.html

https://gist.github.com/henrik/1967800


OpenStack OpenContrail Compute Node Debian Packages

upgrade@controlconfiganalytics:~$ dpkg -l | grep contrail
ii  contrail-fabric-utils               2.10-39                                    Contrail Fabric Utilities for cluster management
ii  contrail-install-packages           2.10-39~icehouse                           Contrail Installer Packages - Container of debian packages
ii  contrail-lib                        2.10-39                                    OpenContrail libraries
ii  contrail-nodemgr                    2.10-39                                    Contrail Config API Library package
ii  contrail-nova-vif                   2.10-39                                    OpenContrail interface driver for nova-compute
ii  contrail-openstack-vrouter          2.10-39                                    Contrail Openstack vRouter composite debian package
ii  contrail-setup                      2.10-39                                    Contrail Setup package with scripts for provisioning
ii  contrail-utils                      2.10-39                                    OpenContrail tools and utilities
ii  contrail-vrouter-agent              2.10-39                                    OpenContrail vrouter agent
ii  contrail-vrouter-common             2.10-39                                    Contrail vRouter composite debian package
ii  contrail-vrouter-dkms               2.10-39                                    OpenContrail VRouter - DKMS version
ii  contrail-vrouter-init               2.10-39                                    OpenContrail compute-node startup and monitoring scripts.
ii  contrail-vrouter-utils              2.10-39                                    OpenContrail VRouter - Utilities
ii  python-backports.ssl-match-hostname 3.4.0.2-1contrail1                         The ssl.match_hostname() function from Python 3.4
ii  python-bitarray                     0.8.0-2contrail1                           Python module for efficient boolean array handling
ii  python-certifi                      1.0.1-1contrail1                           Python SSL Certificates
ii  python-contrail                     2.10-39                                    OpenContrail python-libs
ii  python-contrail-vrouter-api         2.10-39                                    OpenContrail vrouter agent api
ii  python-geventhttpclient             1.1.0-1contrail1                           http client library for gevent
ii  python-lxml                         3.3.1-1contrail1                           pythonic binding for the libxml2 and libxslt libraries
ii  python-opencontrail-vrouter-netns   2.10-39                                    OpenContrail vrouter network namespace package
ii  python-pycassa                      1.11.0-1contrail2                          Client library for Apache Cassandra
ii  python-stevedore                    0.14.1-1contrail4                          manage dynamic plugins for Python applications - python2

Sunday, July 26, 2015

Python ZeroMQ Publish Subscribe Pattern Example

ZeroMQ Publish Subscribe Pattern Python Example

1)
Install pyzmq

pip install pyzmq

2)
Publisher
==========

import zmq
context = zmq.Context()
publisher = context.socket(zmq.PUB)
publisher.bind('tcp://127.0.0.1:1234')
publisher.send_multipart(['a', 'b'])
publisher.send_multipart(['c', 'd'])

3)
Subscriber
==========

import zmq
context = zmq.Context()
subscriber = context.socket(zmq.SUB)
subscriber.setsockopt(zmq.SUBSCRIBE, '')
subscriber.connect('tcp://127.0.0.1:1234')
subscriber.recv_multipart()
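Note: PUB/SUB has a "slow joiner" behaviour, so messages published before the
subscriber has connected and subscribed are silently dropped; if the publisher lines
above are run first, recv_multipart() will block forever. A self-contained sketch
that avoids this by creating the subscriber first and pausing briefly before
publishing (the 0.5 second sleep is illustrative):

import time
import zmq

context = zmq.Context()

subscriber = context.socket(zmq.SUB)
subscriber.setsockopt(zmq.SUBSCRIBE, b'')      # subscribe to all topics
subscriber.connect('tcp://127.0.0.1:1234')

publisher = context.socket(zmq.PUB)
publisher.bind('tcp://127.0.0.1:1234')

time.sleep(0.5)                                # let the SUB connection settle
publisher.send_multipart([b'a', b'b'])
print(subscriber.recv_multipart())             # [b'a', b'b']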


OpenStack Controller zeromq sending truncated message to compute node on compute restart


* The OpenStack controller tries to send a large amount of data (details of all VMs on the node) to a compute node when that compute node restarts.

* But the message gets truncated and the compute node never receives the complete message from the controller.

* The issue may be the TCP buffer size on the OpenStack control node.

* Try increasing the TCP buffer size on the OpenStack control node (see the link and the pyzmq sketch below).

http://fosshelp.blogspot.in/2015/07/how-to-ubuntu-linux-increase-tcp-buffer.html
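The kernel-wide sysctl approach is covered in the link above (and in the next post).
As a related, hedged alternative, pyzmq also exposes per-socket kernel buffer sizes
through the SNDBUF/RCVBUF socket options; a minimal sketch, where the endpoint is
illustrative and the 24 MB value simply mirrors the rmem_max/wmem_max figures in the
next post:

import zmq

context = zmq.Context()
sock = context.socket(zmq.PUSH)

# Request larger kernel send/receive buffers (bytes) for this socket.
# These map to SO_SNDBUF/SO_RCVBUF and must be set before connect/bind;
# the kernel still clamps them to net.core.wmem_max/rmem_max.
sock.setsockopt(zmq.SNDBUF, 25165824)
sock.setsockopt(zmq.RCVBUF, 25165824)

sock.connect('tcp://127.0.0.1:9501')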


How to Ubuntu Linux Increase TCP buffer size

echo 12582912 > /proc/sys/net/core/rmem_default
echo 12582912 > /proc/sys/net/core/wmem_default
echo 25165824 > /proc/sys/net/core/rmem_max
echo 25165824 > /proc/sys/net/core/wmem_max
echo '10240 12582912 25165824' > /proc/sys/net/ipv4/tcp_rmem
echo '10240 12582912 25165824' > /proc/sys/net/ipv4/tcp_wmem
echo 1 > /proc/sys/net/ipv4/route/flush

OpenStack zeromq send_multipart code trace from oslo.messaging._drivers.impl_zmq to eventlet.green.zmq to pyzmq


1)

https://github.com/JioCloud/oslo.messaging/blob/stable/icehouse/oslo/messaging/_drivers/impl_zmq.py#L216

zmq = importutils.try_import('eventlet.green.zmq') <=====1

class ZmqSocket(object):

    """A tiny wrapper around ZeroMQ.

    Simplifies the send/recv protocol and connection management.
    Can be used as a Context (supports the 'with' statement).
    """
    def __init__(self, addr, zmq_type, bind=True, subscribe=None):
        self.sock = _get_ctxt().socket(zmq_type) <=====4

    def send(self, data, **kwargs):
        if not self.can_send:
            raise RPCException(_("You cannot send on this socket."))
        self.sock.send_multipart(data, **kwargs) <====8

def _get_ctxt():

    if not zmq:
        raise ImportError("Failed to import eventlet.green.zmq")
    global ZMQ_CTX
    if not ZMQ_CTX:
        ZMQ_CTX = zmq.Context(CONF.rpc_zmq_contexts) <=====2
    return ZMQ_CTX

2)
https://github.com/eventlet/eventlet/blob/master/eventlet/green/zmq.py#L302

__zmq__ = __import__('zmq')
_Socket = __zmq__.Socket
_Socket_send_multipart = _Socket.send_multipart<=====10

class Context(__zmq__.Context): <=====3
    """Subclass of :class:`zmq.core.context.Context`
    """
    def socket(self, socket_type): <=====5
        """Overridden method to ensure that the green version of socket is used

        Behaves the same as :meth:`zmq.core.context.Context.socket`, but ensures
        that a :class:`Socket` with all of its send and recv methods set to be
        non-blocking is returned
        """
        if self.closed:
            raise ZMQError(ENOTSUP)
        return Socket(self, socket_type) <=====6

class Socket(_Socket): <=====7
    @_wraps(_Socket.send_multipart)<=====9
    def send_multipart(self, msg_parts, flags=0, copy=True, track=False): <=====12
        """A send_multipart method that's safe to use when multiple
        greenthreads are calling send, send_multipart, recv and
        recv_multipart on the same socket.
        """
        if flags & NOBLOCK:
            return _Socket_send_multipart(self, msg_parts, flags, copy, track) <=====13/or

        # acquire lock here so the subsequent calls to send for the
        # message parts after the first don't block
        with self._eventlet_send_lock:
            return _Socket_send_multipart(self, msg_parts, flags, copy, track) <=====13/or

def _wraps(source_fn): <=====11
    """A decorator that copies the __name__ and __doc__ from the given
    function
    """
    def wrapper(dest_fn):
        dest_fn.__name__ = source_fn.__name__
        dest_fn.__doc__ = source_fn.__doc__
        return dest_fn
    return wrapper

3)
https://github.com/zeromq/pyzmq/blob/4b8c8a680bb86fc6e0a82447abd7821f046a026a/zmq/sugar/socket.py#L47

import zmq
from zmq.backend import Socket as SocketBase <======17

class Socket(SocketBase, AttributeSetter):
    def send_multipart(self, msg_parts, flags=0, copy=True, track=False): <=====14
        for i,msg in enumerate(msg_parts):
            if isinstance(msg, (zmq.Frame, bytes, _buffer_type)):
                continue
            try:
                _buffer_type(msg)
            except Exception as e:
                rmsg = repr(msg)
                if len(rmsg) > 32:
                    rmsg = rmsg[:32] + '...'
                raise TypeError(
                    "Frame %i (%s) does not support the buffer interface." % (
                    i, rmsg,
                ))
        for msg in msg_parts[:-1]: <======15
            self.send(msg, SNDMORE|flags, copy=copy, track=track) <======16
        # Send the last part without the extra SNDMORE flag.
        return self.send(msg_parts[-1], flags, copy=copy, track=track)   

__all__ = ['Socket'] <=====

4)
https://github.com/zeromq/pyzmq/blob/master/zmq/backend/__init__.py

from .select import public_api, select_backend

backend = os.environ['PYZMQ_BACKEND'] <====a (Find Backend 'cython' or 'cffi')
if backend in ('cython', 'cffi'):
    backend = 'zmq.backend.%s' % backend

_ns = select_backend(backend) <====b (Load backend api modules)
globals().update(_ns)

__all__ = public_api

5)
https://github.com/zeromq/pyzmq/blob/master/zmq/backend/select.py

public_api = [  <====c (Backend api modules list)
    'Context',
    'Socket',
    ... ...
    ]
   
def select_backend(name):
    mod = __import__(name, fromlist=public_api) <====d (Load backend api modules)

6)

Here, selected backend is "cython"
https://github.com/zeromq/pyzmq/tree/master/zmq/backend/cython

7)
https://github.com/zeromq/pyzmq/blob/master/zmq/backend/cython/socket.pyx#L619

cdef class Socket: <======18
    cpdef object send(self, object data, int flags=0, copy=True, track=False): <======19
        _check_closed(self)
       
        if isinstance(data, unicode):
            raise TypeError("unicode not allowed, use send_string")
       
        if copy:
            # msg.bytes never returns the input data object
            # it is always a copy, but always the same copy
            if isinstance(data, Frame):
                data = data.buffer
            return _send_copy(self.handle, data, flags) <======20
        else:
            if isinstance(data, Frame):
                if track and not data.tracker:
                    raise ValueError('Not a tracked message')
                msg = data
            else:
                msg = Frame(data, track=track)
            return _send_frame(self.handle, msg, flags)
           
cdef inline object _send_copy(void *handle, object msg, int flags=0): <======21
    rc = zmq_msg_init_size(&data, msg_c_len)
    _check_rc(rc)
   
    while True:
        with nogil:
            memcpy(zmq_msg_data(&data), msg_c, zmq_msg_size(&data))
            rc = zmq_msg_send(&data, handle, flags) <======22
            if not rc < 0:
                rc2 = zmq_msg_close(&data)
        try:
            _check_rc(rc)
        except InterruptedSystemCall:
            continue
        else:
            break
    _check_rc(rc2)
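In short, the whole chain ends with pyzmq's Socket.send_multipart() splitting the
message into individual send() calls, every frame except the last flagged with
SNDMORE (steps 15/16 above). A small hedged sketch of that equivalence over a PAIR
socket pair (the port number is arbitrary):

import zmq

context = zmq.Context()
a = context.socket(zmq.PAIR)
a.bind('tcp://127.0.0.1:5557')
b = context.socket(zmq.PAIR)
b.connect('tcp://127.0.0.1:5557')

# send_multipart([f1, f2]) is equivalent to sending each frame separately,
# with SNDMORE set on every frame except the last.
a.send(b'topic', zmq.SNDMORE)
a.send(b'payload')

print(b.recv_multipart())   # [b'topic', b'payload']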

Thursday, July 23, 2015

How to Github fetch a branch from original repo and add to forked repo


I am going to add the branch "R2.1" from the original repo https://github.com/JioCloud/contrail-vnc.git to my forked repo https://github.com/sajuptpm/contrail-vnc.git

1)
$ git clone https://github.com/sajuptpm/contrail-vnc.git
$ cd contrail-vnc

2)
$ git remote add originalrepo https://github.com/JioCloud/contrail-vnc.git

3)
$ git fetch originalrepo

4)
$ git checkout -b R2.1 --track originalrepo/R2.1

5)
$ git push origin R2.1

6)
Go to GitHub and check for the branch "R2.1" in https://github.com/sajuptpm/contrail-vnc.git


Thursday, July 16, 2015

debian packaging: repo tool Tips and Tricks



http://xda-university.com/as-a-developer/repo-tips-tricks <==IMP

https://github.com/JioCloud/repoconf <=== Contains Manifest file "default.xml"

https://github.com/JioCloud/puppet-rjil/blob/master/build_scripts/override_packages.sh
build_scripts/override_packages.sh

#!/bin/bash
set -e

#
# if the repoconf_repo_source and/or repoconf_source_branch
# variables are set, it downloads the default.xml file from
# that remote/branch and uses it to create a package repo
# called new_repo.tgz. This was written to be integrated
# with jenkins to archive this repo for uses with jobs
# that can use that package archive with deploy.sh
#
if [ -n "${repoconf_repo_source}" ]; then
  repo_dir=`pwd`/'pkg_build'
  mkdir -p $repo_dir
  git config --global color.ui false
  pushd $repo_dir
  if [ -n "${repoconf_source_branch}" ]; then
    repo init -u $repoconf_repo_source -b $repoconf_source_branch
  else
    repo init -u $repoconf_repo_source
  fi
  repo sync
  # run majic autobuild command to create a pkg repo called foofil
  bash -x ./debian/sync-repo.sh build
  popd
  sbuild -n -d trusty -A *.dsc
  mkdir new_repo
  cp *.deb new_repo/
  pushd new_repo
  apt-ftparchive packages . > Packages
  tar -cvzf ../new_repo.tgz *
  popd
fi

debian packaging pushd and popd

build_scripts/override_packages.sh
https://github.com/JioCloud/puppet-rjil/blob/master/build_scripts/override_packages.sh

#!/bin/bash
set -e

#
# if the repoconf_repo_source and/or repoconf_source_branch
# variables are set, it downloads the default.xml file from
# that remote/branch and uses it to create a package repo
# called new_repo.tgz. This was written to be integrated
# with jenkins to archive this repo for uses with jobs
# that can use that package archive with deploy.sh
#
if [ -n "${repoconf_repo_source}" ]; then
  repo_dir=`pwd`/'pkg_build'
  mkdir -p $repo_dir
  git config --global color.ui false
  pushd $repo_dir
  if [ -n "${repoconf_source_branch}" ]; then
    repo init -u $repoconf_repo_source -b $repoconf_source_branch
  else
    repo init -u $repoconf_repo_source
  fi
  repo sync
  # run majic autobuild command to create a pkg repo called foofil
  bash -x ./debian/sync-repo.sh build
  popd
  sbuild -n -d trusty -A *.dsc
  mkdir new_repo
  cp *.deb new_repo/
  pushd new_repo
  apt-ftparchive packages . > Packages
  tar -cvzf ../new_repo.tgz *
  popd
fi

Tuesday, July 14, 2015

django-rest-framework How to disable admin-style browsable interface

Add the following lines to settings.py


REST_FRAMEWORK = {
    'DEFAULT_RENDERER_CLASSES': (
        'rest_framework.renderers.JSONRenderer',
    )
}


Python TypeError: start_new_thread expected at least 2 arguments, got 1

>>>
>>> import thread
>>>
>>>
>>>
>>> def fun1():
...     print "hi"
...
>>>
>>>
>>>
>>> thread.start_new_thread(fun1)
Traceback (most recent call last):
  File "", line 1, in
TypeError: start_new_thread expected at least 2 arguments, got 1
>>>
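The fix: thread.start_new_thread() requires the args tuple as its second argument
even when the function takes no arguments, so pass an empty tuple. A minimal sketch
(Python 2, matching the session above):

import thread
import time

def fun1():
    print "hi"

# The second argument (args tuple) is mandatory; use () for no arguments.
thread.start_new_thread(fun1, ())

# Give the thread a moment to print before the main thread exits.
time.sleep(1)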

Monday, July 13, 2015

How to OpenStack ec2-api Manual Installation

https://github.com/stackforge/ec2-api

1)
#git clone https://github.com/stackforge/ec2-api.git

2)
export OS_USERNAME=admin
export OS_PASSWORD=contrail123
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://10.140.218.21:35357/v2.0

3)
#sudo apt-get install python-pip

#sudo pip install virtualenv

#sudo pip install -r requirements.txt

4)
./install.sh

*To fix EnvironmentError: mysql_config not found
#sudo apt-get install libmysqlclient-dev

5)
After installation
#ls /usr/local/bin/ | grep ec2-
ec2-api
ec2-api-manage
ec2-api-metadata
ec2-api-s3

6)
/usr/local/bin/ec2-api

Error:
AttributeError: 'Module_six_moves_urllib_parse' object has no attribute 'SplitResult'

Fix:
#sudo gdebi python-six_1.9.0-3_all.deb

7)
/usr/local/bin/ec2-api

Error:
Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. Are you sure that git is installed?

Fix:
* To fix this issue, go to the cloned directory of ec2-api: "#cd /home/saju/ec2-api"
* Then run "/usr/local/bin/ec2-api"

8)
#cd /home/saju/ec2-api

a)
ec2-api
saju@ubuntu:~/ec2-api$ /usr/local/bin/ec2-api
2015-06-01 18:44:11.687 29105 WARNING oslo_log.versionutils [-] Deprecated: WritableLogger() is deprecated as of Liberty and may be removed in M. It will not be superseded.
2015-06-01 18:44:11.689 29105 INFO ec2api.wsgi [-] ec2api listening on 0.0.0.0:8788
2015-06-01 18:44:11.693 29105 INFO ec2api.wsgi.server [-] (29105) wsgi starting up on http://0.0.0.0:8788/

b)
ec2-api-metadata
saju@ubuntu:~/ec2-api$ /usr/local/bin/ec2-api-metadata
2015-06-01 18:45:14.997 29116 WARNING oslo_log.versionutils [-] Deprecated: WritableLogger() is deprecated as of Liberty and may be removed in M. It will not be superseded.
2015-06-01 18:45:14.999 29116 INFO ec2api.wsgi [-] metadata listening on 0.0.0.0:8789
2015-06-01 18:45:15.004 29116 INFO ec2api.wsgi.server [-] (29116) wsgi starting up on http://0.0.0.0:8789/

c)
ec2-api-s3
saju@ubuntu:~/ec2-api$ /usr/local/bin/ec2-api-s3
2015-06-01 18:45:48.502 29128 WARNING oslo_log.versionutils [-] Deprecated: WritableLogger() is deprecated as of Liberty and may be removed in M. It will not be superseded.
2015-06-01 18:45:48.505 29128 INFO ec2api.wsgi [-] S3 Objectstore listening on 0.0.0.0:3334
2015-06-01 18:45:48.508 29128 INFO ec2api.wsgi.server [-] (29128) wsgi starting up on http://0.0.0.0:3334/

9)
a)
#cd /home/saju/ec2-api

b)
Open a screen session
#screen -S ec2api

c)
Create 3 screen tabs
Ctrl + a, then c --> 3 times

d)
Run "ec2-api", "ec2-api-metadata" and "ec2-api-s3" in each tab.
/usr/local/bin/ec2-api

/usr/local/bin/ec2-api-metadata

/usr/local/bin/ec2-api-s3

10)
a)
#sudo apt-get install awscli

b)
Run "keystone ec2-credentials-list" and copy access and secret keys

c)
#mkdir ~/.aws
#vim ~/.aws/config
[default]
aws_access_key_id = 32c4af37576c421b8dc59be822f6502e
aws_secret_access_key = 47810cb60434424caf622860d5c4e64c
region = nova

d)
#aws --endpoint-url http://10.0.2.15:8788/services/Cloud ec2 describe-instances

* Check the log in the window where /usr/local/bin/ec2-api is running.

e)
Error:
ERROR ec2api.api.ec2utils ValueError: time data '2015-06-01T14:11:26.400231' does not match format '%Y-%m-%dT%H:%M:%SZ'

Fix:
In /ec2api/api/__init__.py set "expired = False" in method "__call__" of class "Requestify".

f)
#aws --endpoint-url http://192.168.56.101:8788/services/Cloud ec2 describe-images
#aws --endpoint-url http://192.168.56.101:8788/services/Cloud ec2 describe-vpcs
#aws --endpoint-url http://192.168.56.101:8788/services/Cloud ec2 describe-route-tables
#aws --endpoint-url http://192.168.56.101:8788/services/Cloud ec2 create-vpc --cidr-block 10.4.4.0/24
#aws --endpoint-url http://192.168.56.101:8788/services/Cloud ec2 create-subnet --vpc-id vpc-e6d1d26a --cidr-block 10.4.4.0/24
#aws --endpoint-url http://192.168.56.101:8788/services/Cloud ec2 delete-subnet

openstack contrail can't ping to private ip, controller failed to collect VM data from ifmap

#sudo service contrail-schema status

#sudo tcpdump -i eth1 -s 0 -w dump1.pcap
#sudo tcpdump -i lo -s 0 -w dump1.pcap <=== To capture from a contrail big debian package setup and open the capture in wireshark.

xmpp_server_port:
tcp.port==5269

cassandra.server_port:
tcp.port==9160

ifmap_server_port:
tcp.port==8443

zk_server_port:
tcp.port==2181

bgp_port:
tcp.port==179

rabbit_port:
tcp.port==5672

To find a string within a packet, click on Edit > Find Packet. Under "Find By:" select "String" and enter your search string in the text entry box. There is a lot more information in most packets than what appears in the packet list Info column, so try "Packet details" and "Packet bytes".

################################

IFMAP Poll
=================
a)
schema-transformer poll from ifmap:
contrail-controller/src/config/schema-transformer/to_bgp.py:

b)
svc_monitor poll from ifmap:
contrail-controller/src/config/svc-monitor/svc_monitor/svc_monitor.py

c)
controller poll from ifmap:
contrail-controller/src/ifmap/client/ifmap_state_machine.cc <=== Contains ifmap client code
contrail-controller/src/ifmap/client/ifmap_channel.cc <=== contains code which set namespaces
contrail-controller/src/ifmap/client/test/ifmap_state_machine_test.cc

d)
contrail control ifmap poll, subscribe code:
contrail-controller/src/ifmap/client/ifmap_channel.cc
* Search for contrail:config-root:root

* void IFMapChannel::SendSubscribe()
* void IFMapChannel::SubscribeResponseWait()
* int IFMapChannel::ReadSubscribeResponseStr()

* void IFMapChannel::SendPollRequest()
* void IFMapChannel::PollResponseWait()
* int IFMapChannel::ReadPollResponse()

e)
Also check 
contrail-controller/src/ifmap/client/ifmap_state_machine.cc

* struct SendSubscribe : sc::state<SendSubscribe, IFMapStateMachine> {
* struct SubscribeResponseWait :

* struct ArcConnect : sc::state<ArcConnect, IFMapStateMachine>
* struct ArcSslHandshake : sc::state<ArcSslHandshake, IFMapStateMachine> {

* struct SendPoll : sc::state<SendPoll, IFMapStateMachine> {
* struct PollResponseWait :

* IFMapStateMachine::IFMapStateMachine(IFMapManager *manager)

f)
contrail-controller/src/ifmap/ifmap_server_parser.cc

* bool IFMapServerParser::ParseResultItem(
* void IFMapServerParser::ParseResults(
* bool IFMapServerParser::ParseMetadata(const pugi::xml_node &node,
* static DBRequest *IFMapServerRequestClone(const DBRequest *src) {

g)
contrail-controller/src/ifmap/ifmap_graph_walker.cc

* void IFMapGraphWalker::AddNodesToWhitelist() {
* void IFMapGraphWalker::AddLinksToWhitelist() {

################################

Official repo of ifmap-python-client:
https://github.com/ITI/ifmap-python-client/

contrail patch:
https://github.com/Juniper/contrail-third-party/blob/master/ifmap-python-async.patch

########################

xmllint

a)
sudo apt-get install libxml2-utils   # provides xmllint
xmllint --format dump1.xml > new_dump1.xml
xmllint --format dump2.xml > new_dump2.xml

vimdiff new_dump1.xml new_dump2.xml


diff -y 1.xml 2.xml
diff -y --suppress-common-lines 1.xml 2.xml

#######################

Newly added lines in ifmap on schema start after vm creation.

a)
<resultItem>
  <identity name="contrail:virtual-machine-interface:default-domain:testtenant:6d7ceaea-04a0-4067-8e25-91c2da6c804d" type="other" other-type-definition="extended"/>
  <identity name="contrail:routing-instance:default-domain:testtenant:mynw1:mynw1" type="other" other-type-definition="extended"/>
  <metadata>
    <contrail:virtual-machine-interface-routing-instance xmlns:contrail="http://www.contrailsystems.com/vnc_cfg.xsd" ifmap-cardinality="singleValue" ifmap-publisher-id="api-server-1--0000000001-1" ifmap-timestamp="2015-07-08T12:09:12+00:00">
      <direction>both</direction>
    </contrail:virtual-machine-interface-routing-instance>
  </metadata>
</resultItem>

b)
class SchemaTransformer(object):
    def add_virtual_machine_interface_virtual_network(self, idents, meta):

c)
class VirtualMachineInterfaceST(DictST):
    def set_virtual_network(self, vn_name):
        if_obj = _vnc_lib.virtual_machine_interface_read(id=self.uuid) <===1
        ri = virtual_network.get_primary_routing_instance().obj
        refs = if_obj.get_routing_instance_refs()
        if ri.get_fq_name() not in [r['to'] for r in (refs or [])]:
            if_obj.add_routing_instance( <===2
                ri, PolicyBasedForwardingRuleType(direction="both")) <===3 meta <direction>both</direction>
            
#######################  

The publish operation associates metadata with identifiers or links between identifiers. The
search and subscribe operations use an identifier as the starting point for a query (see 3.7).

#######################

https://www.juniper.net/techpubs/en_US/release-independent/contrail/information-products/pathway-pages/api-server/vnc_cfg_api_server.ifmap.html

#######################

to_bgp.py methods which get invoked after vm creation.
idents['virtual-machine-interface'], identity
--------------------------------------------------

    def add_virtual_machine_interface_virtual_network(self, idents, meta):
        vmi_name = idents['virtual-machine-interface']
        vn_name = idents['virtual-network']
        vmi = VirtualMachineInterfaceST.locate(vmi_name)
        if vmi is not None:
            vmi.set_virtual_network(vn_name)
            self.current_network_set |= vmi.rebake()


    def add_instance_ip_virtual_machine_interface(self, idents, meta):
        vmi_name = idents['virtual-machine-interface']
        ip_name = idents['instance-ip']
        vmi = VirtualMachineInterfaceST.locate(vmi_name)
        if vmi is not None:
            vmi.add_instance_ip(ip_name)
            self.current_network_set |= vmi.rebake()

    def add_virtual_machine_interface_properties(self, idents, meta):
        vmi_name = idents['virtual-machine-interface']
        prop = VirtualMachineInterfacePropertiesType()
        vmi = VirtualMachineInterfaceST.locate(vmi_name)
        prop.build(meta)
        if vmi is not None:
            vmi.set_service_interface_type(prop.get_service_interface_type())
            self.current_network_set |= vmi.rebake()
            vmi.set_interface_mirror(prop.get_interface_mirror())

    def add_virtual_machine_interface_virtual_machine(self, idents, meta):
        vmi_name = idents['virtual-machine-interface']
        vm_name = idents['virtual-machine']
        vmi = VirtualMachineInterfaceST.locate(vmi_name)
        if vmi is not None:
            vmi.set_virtual_machine(vm_name)

###############################

$zgrep eef20dbe-4a78-4a90-af05-162aaf87cba4 *.gz > somefile

#############################

CleanupInterest:
controller/src/ifmap/ifmap_graph_walker.cc


IFMapGraphWalker::IFMapGraphWalker(DBGraph *graph, IFMapExporter *exporter)
    : graph_(graph),
      exporter_(exporter),
      work_queue_(TaskScheduler::GetInstance()->GetTaskId("db::DBTable"), 0,
                  boost::bind(&IFMapGraphWalker::Worker, this, _1)) {
    work_queue_.SetExitCallback(
        boost::bind(&IFMapGraphWalker::WorkBatchEnd, this, _1));<====1
    traversal_white_list_.reset(new IFMapTypenameWhiteList());
    AddNodesToWhitelist(); <====3
    AddLinksToWhitelist(); <====4
}

// Cleanup all graph nodes that have a bit set in the remove mask (rm_mask_) but
// were not visited by the walker.
void IFMapGraphWalker::WorkBatchEnd(bool done) { <====2
    for (DBGraph::vertex_iterator iter = graph_->vertex_list_begin();
         iter != graph_->vertex_list_end(); ++iter) {
        DBGraphVertex *vertex = iter.operator->();
        CleanupInterest(vertex);
    }
    rm_mask_.clear();
}

void IFMapGraphWalker::CleanupInterest(DBGraphVertex *vertex) {
    // interest = interest - rm_mask_ + nmask
    IFMapNode *node = static_cast<IFMapNode *>(vertex);
    IFMapNodeState *state = exporter_->NodeStateLookup(node);
    if (state == NULL) {
        return;
    }

    if (!state->interest().empty() && !state->nmask().empty()) {
        IFMAP_DEBUG(CleanupInterest, node->ToString(),
                    state->interest().ToString(), rm_mask_.ToString(),
                    state->nmask().ToString());
    }

    .... ....
    .... ....
}

#############################

IFMapClientSendInfo:
controller/src/ifmap/ifmap_update_sender.cc

#############################

http://lists.opencontrail.org/pipermail/dev_lists.opencontrail.org/2014-December/001782.html

> 2014-12-05 00:56:01,783 [main] INFO   - Starting irond version 0.3.2...
> 2014-12-05 00:56:05,582 [main] INFO   - EventProcessor: Running with 4 workers and 2 forwarders
> 2014-12-05 00:56:05,587 [main] INFO   - ActionProcessor: Running with 1 workers and 1 forwarders
> 2014-12-05 00:56:05,587 [main] INFO   - ChannelAcceptor: Listening on port 8443 for incoming basic authentication connections
> 2014-12-05 00:56:05,590 [main] INFO   - ChannelAcceptor: Listening on port 8444 for incoming certificate-based authentication connections
> 2014-12-05 00:56:05,595 [main] INFO   - irond is running :-)
> 2014-12-05 00:56:06,060 [pool-1-thread-1] FATAL  - Could not store generated publisher-id for api-server
> 2014-12-05 00:56:18,139 [pool-1-thread-1] FATAL  - Could not store generated publisher-id for control
> 2014-12-05 00:56:24,442 [pool-1-thread-3] FATAL  - Could not store generated publisher-id for schema-transformer

########### IMP-1 ##################

a)
All routing instances of networks in all projects got cleaned up on 2015-07-01 (Wed)

#zgrep 'CleanupInterest: routing-instance:default-domain' contrail-control.log-20150702.gz
#zgrep 'CleanupInterest: routing-instance:default-domain' contrail-control.log-20150702.gz  | grep jiocloud_pro

b)
Before the above error (a), I can see some "IFMapXmppVmSubUnsub" errors in contrail-control.log-20150702.gz on the same date, 2015-07-01 (Wed).

#zgrep 'IFMapXmppVmSubUnsub: VmUnsubscribe ct1-production' contrail-control.log-20150702.gz

===========

#zgrep 'Error received instead of PollResult' *.gz

###########IMP-2##################

1)
=======
a)
Bring down the ifmap server

b)
Create a VM
* Note Id of the VM

c)
Check
#curl http://127.0.0.1:8083/Snh_IFMapPendingVmRegReq
* The VM id should be listed there.
* In the production setup, we have seen the id of the buggy VM in this list.

d)
* Run "#nova show <vm-id>" and find the private ip of vm.
* Run "#neutron port-list" and find the port-id and construct interface name like "tap<fisr_11_char_of_port_id>" eg:tapce6e07bc-66.

e)
Search for vm's id in /var/log/contrail/contrail-control.log

* You should see an entry like 
2015-07-11 Sat 10:44:44:000.010 UTC  ct1-testjenkins-puppet-rjil-gate-1659 [Thread 140043615483648, Pid 20567]: Sandesh: Send: FAILED: 1436611483999988 IFMapXMPP [SYS_DEBUG]: IFMapXmppVmSubUnsub: VmSubscribe ct1-testjenkins-puppet-rjil-gate-1659:10.0.0.70 08f221ab-72f3-46c4-b823-f7f3a2097b5d controller/src/ifmap/ifmap_xmpp.cc 216

f)
Go to compute node of the VM and check tap interface "tapce6e07bc-66"
#curl http://127.0.0.1:8085/Snh_ItfReq?x=tapce6e07bc-66

* You can see that vrf_name of interface "tapce6e07bc-66" is --ERROR--
<vrf_name type="string" identifier="4" link="VrfListReq">--ERROR--</vrf_name>

g)
Search results for the buggy vm_id from the production contrail-control.log-20150702.gz

g1)
* VM created on 2015-07-01 Wed 08:56
* First VmSubscribe request from agent on 2015-07-01 Wed 08:56:

2015-07-01 Wed 08:56:17:122.262 UTC  ct1-production [Thread 140105121527552, Pid 48037]: Sandesh: Send: FAILED: 1435740977122235 IFMapXMPP [SYS_DEBUG]: IFMapXmppVmSubUnsub: VmSubscribe ct1-production:10.140.192.96 0d4722d7-c1d9-421c-94eb-eaf278597226 controller/src/ifmap/ifmap_xmpp.cc 216

g2)
* (after one day) Second VmSubscribe request from the agent on 2015-07-02 Thu 02:48:
* I think something got restarted or fixed on 2015-07-02 Thu 02:48:

2015-07-02 Thu 02:48:26:590.027 UTC  ct1-production [Thread 139899038537472, Pid 61256]: Sandesh: Send: FAILED: 1435805306590019 IFMapXMPP [SYS_DEBUG]: IFMapXmppVmSubUnsub: VmSubscribe ct1-production:10.140.192.96 0d4722d7-c1d9-421c-94eb-eaf278597226 controller/src/ifmap/ifmap_xmpp.cc 216

g3)
* Got data from ifmap on 2015-07-02 Thu 02:48:

2015-07-02 Thu 02:48:28:057.616 UTC  ct1-production [Thread 139899379013376, Pid 61256]: Sandesh: Send: FAILED: 1435805308024459 IFMapPeer [SYS_DEBUG]: IFMapServerConnection: 0 bytes in reply_. 8843481 bytes in reply_str. PollResponse message is:
 HTTP/1.1 200 ^M
Content-Type: application/soap+xml^M
Content-Length: 8843403^M
</resultItem><resultItem><identity name="contrail:virtual-machine:0d4722d7-c1d9-421c-94eb-eaf278597226" type="other" other-type-definition="extend

2015-07-02 Thu 02:48:28:485.031 UTC  ct1-production [Thread 139899383211776, Pid 61256]: Sandesh: Send: FAILED: 1435805308485025 IFMap [SYS_DEBUG]: IFMapNodeOperation: Creating virtual-machine:0d4722d7-c1d9-421c-94eb-eaf278597226 controller/src/ifmap/ifmap_server_table.cc 89

2015-07-02 Thu 02:48:28:565.594 UTC  ct1-production [Thread 139899383211776, Pid 61256]: Sandesh: Send: FAILED: 1435805308565585 IFMap [SYS_DEBUG]: IFMapLinkOperation: Creating link <virtual-machine-interface:default-domain:openstack:eef20dbe-4a78-4a90-af05-162aaf87cba4,virtual-machine:0d4722d7-c1d9-421c-94eb-eaf278597226> controller/src/ifmap/ifmap_link_table.cc 67

2015-07-02 Thu 02:48:29:253.401 UTC  ct1-production [Thread 139898275215104, Pid 61256]: Sandesh: Send: FAILED: 1435805309253389 IFMap [SYS_DEBUG]: IFMapLinkOperation: Creating link <virtual-router:default-global-system-config:cp4-production,virtual-machine:0d4722d7-c1d9-421c-94eb-eaf278597226> controller/src/ifmap/ifmap_link_table.cc 67

2015-07-02 Thu 02:48:30:512.098 UTC  ct1-production [Thread 139898292008704, Pid 61256]: Sandesh: Send: FAILED: 1435805310512089 IFMap [SYS_DEBUG]: LinkOper: LinkAdd virtual-router:default-global-system-config:cp4-production - virtual-machine:0d4722d7-c1d9-421c-94eb-eaf278597226 , lhs: 000000000000000001 , rhs:  controller/src/ifmap/ifmap_graph_walker.cc 106
2015-07-02 Thu 02:48:30:514.012 UTC  ct1-production [Thread 139898292008704, Pid 61256]: Sandesh: Send: FAILED: 1435805310514003 IFMap [SYS_DEBUG]: JoinVertex: JoinVertex: virtual-machine:0d4722d7-c1d9-421c-94eb-eaf278597226 controller/src/ifmap/ifmap_graph_walker.cc 81

2015-07-02 Thu 02:48:30:583.899 UTC  ct1-production [Thread 139898275215104, Pid 61256]: Sandesh: Send: FAILED: 1435805310583891 IFMap [SYS_DEBUG]: IFMapClientSendInfo: Sent Update of virtual-machine:0d4722d7-c1d9-421c-94eb-eaf278597226 to vRouter ct1-production:10.140.192.96 controller/src/ifmap/ifmap_update_sender.cc 315

2015-07-02 Thu 02:48:30:583.938 UTC  ct1-production [Thread 139898275215104, Pid 61256]: Sandesh: Send: FAILED: 1435805310583928 IFMap [SYS_DEBUG]: IFMapClientSendInfo: Sent Update of link <virtual-router:default-global-system-config:cp4-production,virtual-machine:0d4722d7-c1d9-421c-94eb-eaf278597226> to vRouter ct1-production:10.140.192.96 controller/src/ifmap/ifmap_update_sender.cc 315

2015-07-02 Thu 02:48:31:702.033 UTC  ct1-production [Thread 139899097315072, Pid 61256]: Sandesh: Send: FAILED: 1435805311702025 IFMap [SYS_DEBUG]: IFMapClientSendInfo: Sent Update of link <virtual-machine-interface:default-domain:openstack:eef20dbe-4a78-4a90-af05-162aaf87cba4,virtual-machine:0d4722d7-c1d9-421c-94eb-eaf278597226> to vRouter ct1-production:10.140.192.96 controller/src/ifmap/ifmap_update_sender.cc 315

h)
* Search for "2015-07-02 Thu 02:48" in production contrail-control.log-20150702.gz
* You will get following clues

h0)

* This error can be seen from "2015-07-01 Wed 06:47" to "2015-07-02 Thu 02:48", until the issue got resolved.
* -(-1) OnSessionEvent TCP Connect Failed
* -(-1):EvTcpConnectFail
* From Thread 140105745336256, Pid 48037

2015-07-02 Thu 02:46:26:114.552 UTC  ct1-production [Thread 140105075345152, Pid 48037]: Connect : EvTcpConnectFail
2015-07-02 Thu 02:46:26:114.637 UTC  ct1-production [Thread 140105075345152, Pid 48037]: Idle
2015-07-02 Thu 02:46:26:114.701 UTC  ct1-production [Thread 140105075345152, Pid 48037]: Processing scm::EvSandeshSend in state Idle
2015-07-02 Thu 02:46:26:114.717 UTC  ct1-production [Thread 140105075345152, Pid 48037]: Wrong state: Idle for event: EvSandeshSend
2015-07-02 Thu 02:46:26:114.731 UTC  ct1-production [Thread 140105075345152, Pid 48037]: Processing scm::EvSandeshSend in state Idle
2015-07-02 Thu 02:46:26:114.743 UTC  ct1-production [Thread 140105075345152, Pid 48037]: Wrong state: Idle for event: EvSandeshSend
2015-07-02 Thu 02:46:26:114.755 UTC  ct1-production [Thread 140105075345152, Pid 48037]: Processing scm::EvTcpDeleteSession in state Idle
2015-07-02 Thu 02:46:26:114.773 UTC  ct1-production [Thread 140105075345152, Pid 48037]: Processing scm::EvSandeshSend in state Idle
2015-07-02 Thu 02:46:26:114.785 UTC  ct1-production [Thread 140105075345152, Pid 48037]: Wrong state: Idle for event: EvSandeshSend
2015-07-02 Thu 02:46:36:419.952 UTC  ct1-production [Thread 140105121527552, Pid 48037]: Processing scm::EvSandeshSend in state Idle
2015-07-02 Thu 02:46:36:420.056 UTC  ct1-production [Thread 140105121527552, Pid 48037]: Wrong state: Idle for event: EvSandeshSend
2015-07-02 Thu 02:46:44:318.227 UTC  ct1-production [Thread 140105573189376, Pid 48037]: Processing scm::EvSandeshSend in state Idle
2015-07-02 Thu 02:46:44:318.320 UTC  ct1-production [Thread 140105573189376, Pid 48037]: Wrong state: Idle for event: EvSandeshSend
2015-07-02 Thu 02:46:44:318.369 UTC  ct1-production [Thread 140105573189376, Pid 48037]: Processing scm::EvSandeshSend in state Idle
2015-07-02 Thu 02:46:44:318.410 UTC  ct1-production [Thread 140105573189376, Pid 48037]: Wrong state: Idle for event: EvSandeshSend
2015-07-02 Thu 02:46:56:114.903 UTC  ct1-production [Thread 140105066948352, Pid 48037]: Processing scm::EvIdleHoldTimerExpired in state Idle
2015-07-02 Thu 02:46:56:115.232 UTC  ct1-production [Thread 140105066948352, Pid 48037]: Connect : Start Connect timer 10.140.192.40:8086
2015-07-02 Thu 02:46:56:115.266 UTC  ct1-production [Thread 140105745336256, Pid 48037]: -(-1) OnSessionEvent TCP Connect Failed
2015-07-02 Thu 02:46:56:115.352 UTC  ct1-production [Thread 140105745336256, Pid 48037]: -(-1):EvTcpConnectFail
2015-07-02 Thu 02:46:56:115.417 UTC  ct1-production [Thread 140105066948352, Pid 48037]: Processing scm::EvSandeshSend in state Connect
2015-07-02 Thu 02:46:56:115.457 UTC  ct1-production [Thread 140105066948352, Pid 48037]: Wrong state: Connect for event: EvSandeshSend
2015-07-02 Thu 02:46:56:115.488 UTC  ct1-production [Thread 140105066948352, Pid 48037]: Processing scm::EvTcpConnectFail in state Connect
2015-07-02 Thu 02:46:56:115.516 UTC  ct1-production [Thread 140105066948352, Pid 48037]: Connect : EvTcpConnectFail
2015-07-02 Thu 02:46:56:115.790 UTC  ct1-production [Thread 140105066948352, Pid 48037]: Idle
2015-07-02 Thu 02:46:56:115.932 UTC  ct1-production [Thread 140105066948352, Pid 48037]: Processing scm::EvSandeshSend in state Idle

h01)
* Thread 140105745336256, Pid 48037 <=== IMP

2015-07-01 Wed 13:35:54:287.037 UTC  ct1-production [Thread 140105745336256, Pid 48037]: -(-1) OnSessionEvent TCP Connect Failed
2015-07-01 Wed 13:35:54:287.124 UTC  ct1-production [Thread 140105745336256, Pid 48037]: -(-1):EvTcpConnectFail

2015-07-01 Wed 13:35:59:390.062 UTC  ct1-production [Thread 140105745336256, Pid 48037]: Sandesh: Send: FAILED: 1435757759390034 XMPP [SYS_DEBUG]: XmppCreateConnection: Xmpp creating dynamic channel 10.140.192.142 controller/src/xmpp/xmpp_server.cc 273
2015-07-01 Wed 13:35:59:390.211 UTC  ct1-production [Thread 140105745336256, Pid 48037]: Sandesh: Send: FAILED: 1435757759390195 XMPP [SYS_INFO]: XmppConnectionCreate: Created Xmpp  Server  connection from  ct1-production  To :   controller/src/xmpp/xmpp_connection.cc 449

* This error was resolved along with the "http socket open failed" error in the following thread, after a service restart.

2015-07-02 Thu 02:47:47:784.263 UTC  ct1-production [Thread 140105568990976, Pid 48037]: http socket open failed: system:24
2015-07-02 Thu 02:47:52:784.758 UTC  ct1-production [Thread 140105117329152, Pid 48037]: http socket open failed: system:24

2015-07-02 Thu 02:47:56:946.014 UTC  ct1-production [Thread 140105108932352, Pid 48037]: http socket open failed: system:24
2015-07-02 Thu 02:47:58:784.978 UTC  ct1-production [Thread 140105092138752, Pid 48037]: http socket open failed: system:24
2015-07-02 Thu 02:48:01:946.133 UTC  ct1-production [Thread 140105062749952, Pid 48037]: http socket open failed: system:24
2015-07-02 Thu 02:48:06:169.145 UTC  ct1-production [Thread 140105100535552, Pid 48037]: http socket open failed: system:24
2015-07-02 Thu 02:48:06:784.386 UTC  ct1-production [Thread 140105100535552, Pid 48037]: http socket open failed: system:24
2015-07-02 Thu 02:48:07:945.701 UTC  ct1-production [Thread 140105573189376, Pid 48037]: http socket open failed: system:24
2015-07-02 Thu 02:48:11:169.662 UTC  ct1-production [Thread 140105108932352, Pid 48037]: http socket open failed: system:24

2015-07-02 Thu 02:48:15:945.382 UTC  ct1-production [Thread 140105573189376, Pid 48037]: http socket open failed: system:24
2015-07-02 Thu 02:48:17:170.720 UTC  ct1-production [Thread 140105568990976, Pid 48037]: http socket open failed: system:24
2015-07-02 Thu 02:48:18:784.435 UTC  ct1-production [Thread 140105071146752, Pid 48037]: http socket open failed: system:24
2015-07-02 Thu 02:48:25:171.870 UTC  ct1-production [Thread 140105071146752, Pid 48037]: http socket open failed: system:24


h1)
2015-07-02 Thu 02:47:47:704.928 UTC  ct1-production [Thread 140105745336256, Pid 48037]: Sandesh: Send: FAILED: 1435805267704889 XMPP [SYS_DEBUG]: XmppCreateConnection: Xmpp creating dynamic channel 10.140.192.34 controller/src/xmpp/xmpp_server.cc 273
2015-07-02 Thu 02:47:47:705.155 UTC  ct1-production [Thread 140105745336256, Pid 48037]: Sandesh: Send: FAILED: 1435805267705131 XMPP [SYS_INFO]: XmppConnectionCreate: Created Xmpp  Server  connection from  ct1-production  To :   controller/src/xmpp/xmpp_connection.cc 449
2015-07-02 Thu 02:47:47:784.263 UTC  ct1-production [Thread 140105568990976, Pid 48037]: http socket open failed: system:24
2015-07-02 Thu 02:47:52:784.758 UTC  ct1-production [Thread 140105117329152, Pid 48037]: http socket open failed: system:24

h2)
2015-07-02 Thu 02:48:15:945.382 UTC  ct1-production [Thread 140105573189376, Pid 48037]: http socket open failed: system:24
2015-07-02 Thu 02:48:17:170.720 UTC  ct1-production [Thread 140105568990976, Pid 48037]: http socket open failed: system:24
2015-07-02 Thu 02:48:18:784.435 UTC  ct1-production [Thread 140105071146752, Pid 48037]: http socket open failed: system:24

h3)
* Starting Bgp Server at port 179

2015-07-02 Thu 02:48:26:173.363 UTC  ct1-production [Thread 139899567953856, Pid 61256]: Starting Bgp Server at port 179
2015-07-02 Thu 02:48:26:175.422 UTC  ct1-production [Thread 139899567953856, Pid 61256]: DiscoveryClientMsg: publish/ct1-production xmpp-server <xmpp-server><ip-address>10.140.192.40</ip-address><port>5269</port></xmpp-server> controller/src/discovery/client/discovery_client.cc 362
2015-07-02 Thu 02:48:26:177.925 UTC  ct1-production [Thread 139899567953856, Pid 61256]: SANDESH: ROLE             : Generator
2015-07-02 Thu 02:48:26:177.949 UTC  ct1-production [Thread 139899567953856, Pid 61256]: SANDESH: MODULE           : ControlNode
2015-07-02 Thu 02:48:26:177.959 UTC  ct1-production [Thread 139899567953856, Pid 61256]: SANDESH: SOURCE           : ct1-production
2015-07-02 Thu 02:48:26:177.969 UTC  ct1-production [Thread 139899567953856, Pid 61256]: SANDESH: NODE TYPE        : Control
2015-07-02 Thu 02:48:26:177.978 UTC  ct1-production [Thread 139899567953856, Pid 61256]: SANDESH: INSTANCE ID      : 0
2015-07-02 Thu 02:48:26:177.988 UTC  ct1-production [Thread 139899567953856, Pid 61256]: SANDESH: HTTP SERVER PORT : 8083
2015-07-02 Thu 02:48:26:182.747 UTC  ct1-production [Thread 139899567953856, Pid 61256]: HTTP Introspect Init
2015-07-02 Thu 02:48:26:183.051 UTC  ct1-production [Thread 139899567953856, Pid 61256]: Sandesh: Send: No client: 1435805306183029 TCP [SYS_DEBUG]: TcpServerMessageLog: Server 0.0.0.0:8083  Initialization complete controller/src/io/tcp_server.cc 100
2015-07-02 Thu 02:48:26:183.124 UTC  ct1-production [Thread 139899567953856, Pid 61256]: Sandesh Http Server Port 8083
20

h4)
* Note the event change from Idle ===> ServerResolve ===> SsrcConnect ===> SsrcSslHandshake ===> SendNewSession ===> NewSessionResponseWait

SYS_DEBUG]: IFMapSmEventMessage: Processing ifsm::EvStart in state Idle controller/src/ifmap/client/ifmap_state_machine.cc 944
2015-07-02 Thu 02:48:26:195.302 UTC  ct1-production [Thread 139899088918272, Pid 61256]: Sandesh: Send: FAILED: 1435805306195291 IFMapStateMachine [SYS_DEBUG]: IFMapSmTransitionMessage: Idle ===> ServerResolve controller/src/ifmap/client/ifmap_state_machine.cc 954
2015-07-02 Thu 02:48:26:195.350 UTC  ct1-production [Thread 139899088918272, Pid 61256]: Sandesh: Send: FAILED: 1435805306195340 IFMapStateMachine [SYS_DEBUG]: IFMapSmStartTimerMessage: 5 second response timer started. controller/src/ifmap/client/ifmap_state_machine.cc 734

2015-07-02 Thu 02:48:26:198.877 UTC  ct1-production [Thread 139899395806976, Pid 61256]: Sandesh: Send: FAILED: 1435805306198861 IFMapStateMachine [SYS_DEBUG]: IFMapSmTransitionMessage: ServerResolve ===> SsrcConnect controller/src/ifmap/client/ifmap_state_machine.cc 954
2015-07-02 Thu 02:48:26:198.921 UTC  ct1-production [Thread 139899395806976, Pid 61256]: Sandesh: Send: FAILED: 1435805306198907 IFMapStateMachine [SYS_DEBUG]: IFMapSmStartTimerMessage: 5 second response timer started. controller/src/ifmap/client/ifmap_state_machine.cc 734

2015-07-02 Thu 02:48:26:200.457 UTC  ct1-production [Thread 139898279413504, Pid 61256]: Sandesh: Send: FAILED: 1435805306200445 IFMapStateMachine [SYS_DEBUG]: IFMapSmTransitionMessage: SsrcConnect ===> SsrcSslHandshake controller/src/ifmap/client/ifmap_state_machine.cc 954

2015-07-02 Thu 02:48:26:212.918 UTC  ct1-production [Thread 139898283611904, Pid 61256]: Sandesh: Send: FAILED: 1435805306212906 IFMapStateMachine [SYS_DEBUG]: IFMapSmTransitionMessage: SsrcSslHandshake ===> SendNewSession controller/src/ifmap/client/ifmap_state_machine.cc 954

2015-07-02 Thu 02:48:26:213.195 UTC  ct1-production [Thread 139899038537472, Pid 61256]: Sandesh: Send: FAILED: 1435805306213181 IFMapStateMachine [SYS_DEBUG]: IFMapSmTransitionMessage: SendNewSession ===> NewSessionResponseWait controller/src/ifmap/client/ifmap_state_machine.cc 954

h5)
* See (h1) "2015-07-02 Thu 02:47:47:704.928 UTC": there we can see the same log, but with the "http socket open failed: system:24" error after it.

2015-07-02 Thu 02:48:26:198.933 UTC  ct1-production [Thread 139899567953856, Pid 61256]: Sandesh: Send: FAILED: 1435805306198916 XMPP [SYS_DEBUG]: XmppCreateConnection: Xmpp creating dynamic channel 10.140.192.50 controller/src/xmpp/xmpp_server.cc 273
2015-07-02 Thu 02:48:26:199.037 UTC  ct1-production [Thread 139899567953856, Pid 61256]: Sandesh: Send: FAILED: 1435805306199021 XMPP [SYS_INFO]: XmppConnectionCreate: Created Xmpp  Server  connection from  ct1-production  To :   controller/src/xmpp/xmpp_connection.cc 449

h6)
2015-07-02 Thu 02:48:26:311.040 UTC  ct1-production [Thread 139899404203776, Pid 61256]: Sandesh: Send: FAILED: 1435805306311021 IFMap [SYS_DEBUG]: IFMapServerClientRegUnreg: Register request for client  default-global-system-config:cp21-production controller/src/ifmap/ifmap_server.cc 215

h7)
* VrSubscribe and VmSubscribe

2015-07-02 Thu 02:48:26:311.180 UTC  ct1-production [Thread 139899404203776, Pid 61256]: Sandesh: Send: FAILED: 1435805306311168 IFMapXMPP [SYS_DEBUG]: IFMapXmppVrSubUnsub: VrSubscribe ct1-production:10.140.192.145 controller/src/ifmap/ifmap_xmpp.cc 199

2015-07-02 Thu 02:48:26:311.246 UTC  ct1-production [Thread 139899404203776, Pid 61256]: Sandesh: Send: FAILED: 1435805306311233 IFMapXMPP [SYS_DEBUG]: IFMapXmppVmSubUnsub: VmSubscribe ct1-production:10.140.192.145 2ede789b-f57b-4756-bcd2-849441f53937 controller/src/ifmap/ifmap_xmpp.cc 216

2015-07-02 Thu 02:48:26:324.149 UTC  ct1-production [Thread 139899063727872, Pid 61256]: Sandesh: Send: FAILED: 1435805306324132 IFMapXMPP [SYS_DEBUG]: IFMapXmppVmSubUnsub: VmSubscribe ct1-production:10.140.192.145 d34097f0-c070-495a-97c7-eccddea1cdad controller/src/ifmap/ifmap_xmpp.cc 216

... ....

h8)
* resource not found when processing subscribe

2015-07-02 Thu 02:48:26:324.552 UTC  ct1-production [Thread 139899067926272, Pid 61256]: Sandesh: Send: FAILED: 1435805306324540 BGP [SYS_WARN]: XmppPeerMembershipLog: XMPP Peer ct1-production:10.140.192.145  Routing Instance default-domain:217882181541:default-net:default-net not found when processing subscribe controller/src/bgp/bgp_xmpp_channel.cc 1679

2015-07-02 Thu 02:48:26:324.659 UTC  ct1-production [Thread 139899067926272, Pid 61256]: Sandesh: Send: FAILED: 1435805306324646 BGP [SYS_WARN]: XmppPeerMembershipLog: XMPP Peer ct1-production:10.140.192.145  Routing Instance default-domain:248775291331:default-net:default-net not found when processing subscribe controller/src/bgp/bgp_xmpp_channel.cc 1679

2015-07-02 Thu 02:48:26:324.834 UTC  ct1-production [Thread 139899067926272, Pid 61256]: Sandesh: Send: FAILED: 1435805306324821 BGP [SYS_WARN]: XmppPeerMembershipLog: XMPP Peer ct1-production:10.140.192.145  Routing Instance default-domain:jenkins:at_private:at_private not found when processing subscribe controller/src/bgp/bgp_xmpp_channel.cc 1679

2015-07-02 Thu 02:48:26:324.952 UTC  ct1-production [Thread 139899067926272, Pid 61256]: Sandesh: Send: FAILED: 1435805306324941 BGP [SYS_WARN]: XmppPeerMembershipLog: XMPP Peer ct1-production:10.140.192.145  Routing Instance default-domain:jiocloud_compute_pm:network1:network1 not found when processing subscribe controller/src/bgp/bgp_xmpp_channel.cc 1679

... ...

h9)
Search for "Too many open files" in contrail-control.log-20150702.gz to find the reason of error "http socket open failed: system:24"
http://stackoverflow.com/questions/880557/socket-accept-too-many-open-files

2015-07-02 Thu 02:47:56:117.055 UTC  ct1-production [Thread 140105121527552, Pid 48037]: CreateSMSession Open FAILED: Too many open files

#############################



Friday, July 10, 2015

contrail ifmap restart and message loss

The IF-MAP is repopulated by the API server whenever the session between the API server and irond is reset.
You can verify this by restarting irond.
Redis is used by the analytics api to cache data related to queries. It is not used by the configuration
components (api, schema, discovery, etc).

http://comments.gmane.org/gmane.comp.networking.opencontrail.devel/29

Tuesday, July 7, 2015

contrail ifmap-python-client

ifmap official repo:
https://github.com/ITI/ifmap-python-client
https://github.com/ITI/ifmap-python-client/blob/master/ifmap/client.py

Contrail builds ifmap-python-client from above repo and apply some patches:
https://github.com/Juniper/contrail-packages/tree/master/debian/ifmap-python-client/debian
https://github.com/Juniper/contrail-packages/tree/master/debian/ifmap-python-client/debian/patches

Contrail keeps patches in this repo:
https://github.com/Juniper/contrail-third-party
https://github.com/Juniper/contrail-third-party/blob/master/ifmap-python-async.patch

Notes:
https://github.com/sajuptpm/mytools/blob/master/opc/ifmap/api_server_and_ifmap_client.txt

ifmap Poll and subscribe
-----------------

a)
schema-transformer poll from ifmap:
contrail-controller/src/config/schema-transformer/to_bgp.py:

b)
svc_monitor poll from ifmap:
contrail-controller/src/config/svc-monitor/svc_monitor/svc_monitor.py

c)
controller poll from ifmap:
contrail-controller/src/ifmap/client/ifmap_state_machine.cc <=== Contains ifmap client code
contrail-controller/src/ifmap/client/ifmap_channel.cc <=== contains code which set namespaces
contrail-controller/src/ifmap/client/test/ifmap_state_machine_test.cc

d)
contrail control ifmap poll, subscribe code:
contrail-controller/src/ifmap/client/ifmap_channel.cc
* Search for contrail:config-root:root

* void IFMapChannel::SendSubscribe()
* void IFMapChannel::SubscribeResponseWait()
* int IFMapChannel::ReadSubscribeResponseStr()

* void IFMapChannel::SendPollRequest()
* void IFMapChannel::PollResponseWait()
* int IFMapChannel::ReadPollResponse()

e)
Also check
contrail-controller/src/ifmap/client/ifmap_state_machine.cc

* struct SendSubscribe : sc::state<SendSubscribe, IFMapStateMachine> {
* struct SubscribeResponseWait :

* struct ArcConnect : sc::state<ArcConnect, IFMapStateMachine>
* struct ArcSslHandshake : sc::state<ArcSslHandshake, IFMapStateMachine> {

* struct SendPoll : sc::state<SendPoll, IFMapStateMachine> {
* struct PollResponseWait :

* IFMapStateMachine::IFMapStateMachine(IFMapManager *manager)

f)
Find diff
------------
sudo apt-get install libxml2-utils   # provides xmllint
xmllint --format dump1.xml > new_dump1.xml
xmllint --format dump2.xml > new_dump2.xml

vimdiff new_dump1.xml new_dump2.xml

g)

Monday, July 6, 2015

List of contrail vnc ifmap client methods which CRUD ifmap server data

saju@ubuntu:/usr/lib/python2.7/dist-packages$ cat ./vnc_api/gen/vnc_ifmap_client_gen.py | grep "def "
    def __init__(self):
    def _ifmap_domain_alloc(self, parent_type, fq_name):
    def _ifmap_domain_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_domain_create(self, obj_ids, obj_dict):
    def _ifmap_domain_read(self, ifmap_id, field_names = None):
    def _ifmap_domain_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_domain_update(self, ifmap_id, new_obj_dict):
    def _ifmap_domain_list(self, parent_type, parent_fq_name):
    def _ifmap_domain_delete(self, obj_ids):
    def _ifmap_global_vrouter_config_alloc(self, parent_type, fq_name):
    def _ifmap_global_vrouter_config_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_global_vrouter_config_create(self, obj_ids, obj_dict):
    def _ifmap_global_vrouter_config_read(self, ifmap_id, field_names = None):
    def _ifmap_global_vrouter_config_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_global_vrouter_config_update(self, ifmap_id, new_obj_dict):
    def _ifmap_global_vrouter_config_list(self, parent_type, parent_fq_name):
    def _ifmap_global_vrouter_config_delete(self, obj_ids):
    def _ifmap_instance_ip_alloc(self, parent_type, fq_name):
    def _ifmap_instance_ip_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_instance_ip_create(self, obj_ids, obj_dict):
    def _ifmap_instance_ip_read(self, ifmap_id, field_names = None):
    def _ifmap_instance_ip_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_instance_ip_update(self, ifmap_id, new_obj_dict):
    def _ifmap_instance_ip_list(self, parent_type, parent_fq_name):
    def _ifmap_instance_ip_delete(self, obj_ids):
    def _ifmap_network_policy_alloc(self, parent_type, fq_name):
    def _ifmap_network_policy_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_network_policy_create(self, obj_ids, obj_dict):
    def _ifmap_network_policy_read(self, ifmap_id, field_names = None):
    def _ifmap_network_policy_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_network_policy_update(self, ifmap_id, new_obj_dict):
    def _ifmap_network_policy_list(self, parent_type, parent_fq_name):
    def _ifmap_network_policy_delete(self, obj_ids):
    def _ifmap_virtual_DNS_record_alloc(self, parent_type, fq_name):
    def _ifmap_virtual_DNS_record_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_virtual_DNS_record_create(self, obj_ids, obj_dict):
    def _ifmap_virtual_DNS_record_read(self, ifmap_id, field_names = None):
    def _ifmap_virtual_DNS_record_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_virtual_DNS_record_update(self, ifmap_id, new_obj_dict):
    def _ifmap_virtual_DNS_record_list(self, parent_type, parent_fq_name):
    def _ifmap_virtual_DNS_record_delete(self, obj_ids):
    def _ifmap_route_target_alloc(self, parent_type, fq_name):
    def _ifmap_route_target_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_route_target_create(self, obj_ids, obj_dict):
    def _ifmap_route_target_read(self, ifmap_id, field_names = None):
    def _ifmap_route_target_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_route_target_update(self, ifmap_id, new_obj_dict):
    def _ifmap_route_target_list(self, parent_type, parent_fq_name):
    def _ifmap_route_target_delete(self, obj_ids):
    def _ifmap_floating_ip_alloc(self, parent_type, fq_name):
    def _ifmap_floating_ip_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_floating_ip_create(self, obj_ids, obj_dict):
    def _ifmap_floating_ip_read(self, ifmap_id, field_names = None):
    def _ifmap_floating_ip_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_floating_ip_update(self, ifmap_id, new_obj_dict):
    def _ifmap_floating_ip_list(self, parent_type, parent_fq_name):
    def _ifmap_floating_ip_delete(self, obj_ids):
    def _ifmap_floating_ip_pool_alloc(self, parent_type, fq_name):
    def _ifmap_floating_ip_pool_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_floating_ip_pool_create(self, obj_ids, obj_dict):
    def _ifmap_floating_ip_pool_read(self, ifmap_id, field_names = None):
    def _ifmap_floating_ip_pool_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_floating_ip_pool_update(self, ifmap_id, new_obj_dict):
    def _ifmap_floating_ip_pool_list(self, parent_type, parent_fq_name):
    def _ifmap_floating_ip_pool_delete(self, obj_ids):
    def _ifmap_physical_router_alloc(self, parent_type, fq_name):
    def _ifmap_physical_router_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_physical_router_create(self, obj_ids, obj_dict):
    def _ifmap_physical_router_read(self, ifmap_id, field_names = None):
    def _ifmap_physical_router_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_physical_router_update(self, ifmap_id, new_obj_dict):
    def _ifmap_physical_router_list(self, parent_type, parent_fq_name):
    def _ifmap_physical_router_delete(self, obj_ids):
    def _ifmap_bgp_router_alloc(self, parent_type, fq_name):
    def _ifmap_bgp_router_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_bgp_router_create(self, obj_ids, obj_dict):
    def _ifmap_bgp_router_read(self, ifmap_id, field_names = None):
    def _ifmap_bgp_router_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_bgp_router_update(self, ifmap_id, new_obj_dict):
    def _ifmap_bgp_router_list(self, parent_type, parent_fq_name):
    def _ifmap_bgp_router_delete(self, obj_ids):
    def _ifmap_virtual_router_alloc(self, parent_type, fq_name):
    def _ifmap_virtual_router_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_virtual_router_create(self, obj_ids, obj_dict):
    def _ifmap_virtual_router_read(self, ifmap_id, field_names = None):
    def _ifmap_virtual_router_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_virtual_router_update(self, ifmap_id, new_obj_dict):
    def _ifmap_virtual_router_list(self, parent_type, parent_fq_name):
    def _ifmap_virtual_router_delete(self, obj_ids):
    def _ifmap_config_root_alloc(self, parent_type, fq_name):
    def _ifmap_config_root_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_config_root_create(self, obj_ids, obj_dict):
    def _ifmap_config_root_read(self, ifmap_id, field_names = None):
    def _ifmap_config_root_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_config_root_update(self, ifmap_id, new_obj_dict):
    def _ifmap_config_root_list(self, parent_type, parent_fq_name):
    def _ifmap_config_root_delete(self, obj_ids):
    def _ifmap_subnet_alloc(self, parent_type, fq_name):
    def _ifmap_subnet_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_subnet_create(self, obj_ids, obj_dict):
    def _ifmap_subnet_read(self, ifmap_id, field_names = None):
    def _ifmap_subnet_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_subnet_update(self, ifmap_id, new_obj_dict):
    def _ifmap_subnet_list(self, parent_type, parent_fq_name):
    def _ifmap_subnet_delete(self, obj_ids):
    def _ifmap_global_system_config_alloc(self, parent_type, fq_name):
    def _ifmap_global_system_config_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_global_system_config_create(self, obj_ids, obj_dict):
    def _ifmap_global_system_config_read(self, ifmap_id, field_names = None):
    def _ifmap_global_system_config_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_global_system_config_update(self, ifmap_id, new_obj_dict):
    def _ifmap_global_system_config_list(self, parent_type, parent_fq_name):
    def _ifmap_global_system_config_delete(self, obj_ids):
    def _ifmap_loadbalancer_member_alloc(self, parent_type, fq_name):
    def _ifmap_loadbalancer_member_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_loadbalancer_member_create(self, obj_ids, obj_dict):
    def _ifmap_loadbalancer_member_read(self, ifmap_id, field_names = None):
    def _ifmap_loadbalancer_member_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_loadbalancer_member_update(self, ifmap_id, new_obj_dict):
    def _ifmap_loadbalancer_member_list(self, parent_type, parent_fq_name):
    def _ifmap_loadbalancer_member_delete(self, obj_ids):
    def _ifmap_service_instance_alloc(self, parent_type, fq_name):
    def _ifmap_service_instance_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_service_instance_create(self, obj_ids, obj_dict):
    def _ifmap_service_instance_read(self, ifmap_id, field_names = None):
    def _ifmap_service_instance_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_service_instance_update(self, ifmap_id, new_obj_dict):
    def _ifmap_service_instance_list(self, parent_type, parent_fq_name):
    def _ifmap_service_instance_delete(self, obj_ids):
    def _ifmap_namespace_alloc(self, parent_type, fq_name):
    def _ifmap_namespace_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_namespace_create(self, obj_ids, obj_dict):
    def _ifmap_namespace_read(self, ifmap_id, field_names = None):
    def _ifmap_namespace_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_namespace_update(self, ifmap_id, new_obj_dict):
    def _ifmap_namespace_list(self, parent_type, parent_fq_name):
    def _ifmap_namespace_delete(self, obj_ids):
    def _ifmap_route_table_alloc(self, parent_type, fq_name):
    def _ifmap_route_table_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_route_table_create(self, obj_ids, obj_dict):
    def _ifmap_route_table_read(self, ifmap_id, field_names = None):
    def _ifmap_route_table_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_route_table_update(self, ifmap_id, new_obj_dict):
    def _ifmap_route_table_list(self, parent_type, parent_fq_name):
    def _ifmap_route_table_delete(self, obj_ids):
    def _ifmap_physical_interface_alloc(self, parent_type, fq_name):
    def _ifmap_physical_interface_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_physical_interface_create(self, obj_ids, obj_dict):
    def _ifmap_physical_interface_read(self, ifmap_id, field_names = None):
    def _ifmap_physical_interface_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_physical_interface_update(self, ifmap_id, new_obj_dict):
    def _ifmap_physical_interface_list(self, parent_type, parent_fq_name):
    def _ifmap_physical_interface_delete(self, obj_ids):
    def _ifmap_access_control_list_alloc(self, parent_type, fq_name):
    def _ifmap_access_control_list_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_access_control_list_create(self, obj_ids, obj_dict):
    def _ifmap_access_control_list_read(self, ifmap_id, field_names = None):
    def _ifmap_access_control_list_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_access_control_list_update(self, ifmap_id, new_obj_dict):
    def _ifmap_access_control_list_list(self, parent_type, parent_fq_name):
    def _ifmap_access_control_list_delete(self, obj_ids):
    def _ifmap_virtual_DNS_alloc(self, parent_type, fq_name):
    def _ifmap_virtual_DNS_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_virtual_DNS_create(self, obj_ids, obj_dict):
    def _ifmap_virtual_DNS_read(self, ifmap_id, field_names = None):
    def _ifmap_virtual_DNS_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_virtual_DNS_update(self, ifmap_id, new_obj_dict):
    def _ifmap_virtual_DNS_list(self, parent_type, parent_fq_name):
    def _ifmap_virtual_DNS_delete(self, obj_ids):
    def _ifmap_customer_attachment_alloc(self, parent_type, fq_name):
    def _ifmap_customer_attachment_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_customer_attachment_create(self, obj_ids, obj_dict):
    def _ifmap_customer_attachment_read(self, ifmap_id, field_names = None):
    def _ifmap_customer_attachment_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_customer_attachment_update(self, ifmap_id, new_obj_dict):
    def _ifmap_customer_attachment_list(self, parent_type, parent_fq_name):
    def _ifmap_customer_attachment_delete(self, obj_ids):
    def _ifmap_loadbalancer_pool_alloc(self, parent_type, fq_name):
    def _ifmap_loadbalancer_pool_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_loadbalancer_pool_create(self, obj_ids, obj_dict):
    def _ifmap_loadbalancer_pool_read(self, ifmap_id, field_names = None):
    def _ifmap_loadbalancer_pool_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_loadbalancer_pool_update(self, ifmap_id, new_obj_dict):
    def _ifmap_loadbalancer_pool_list(self, parent_type, parent_fq_name):
    def _ifmap_loadbalancer_pool_delete(self, obj_ids):
    def _ifmap_virtual_machine_alloc(self, parent_type, fq_name):
    def _ifmap_virtual_machine_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_virtual_machine_create(self, obj_ids, obj_dict):
    def _ifmap_virtual_machine_read(self, ifmap_id, field_names = None):
    def _ifmap_virtual_machine_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_virtual_machine_update(self, ifmap_id, new_obj_dict):
    def _ifmap_virtual_machine_list(self, parent_type, parent_fq_name):
    def _ifmap_virtual_machine_delete(self, obj_ids):
    def _ifmap_interface_route_table_alloc(self, parent_type, fq_name):
    def _ifmap_interface_route_table_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_interface_route_table_create(self, obj_ids, obj_dict):
    def _ifmap_interface_route_table_read(self, ifmap_id, field_names = None):
    def _ifmap_interface_route_table_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_interface_route_table_update(self, ifmap_id, new_obj_dict):
    def _ifmap_interface_route_table_list(self, parent_type, parent_fq_name):
    def _ifmap_interface_route_table_delete(self, obj_ids):
    def _ifmap_service_template_alloc(self, parent_type, fq_name):
    def _ifmap_service_template_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_service_template_create(self, obj_ids, obj_dict):
    def _ifmap_service_template_read(self, ifmap_id, field_names = None):
    def _ifmap_service_template_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_service_template_update(self, ifmap_id, new_obj_dict):
    def _ifmap_service_template_list(self, parent_type, parent_fq_name):
    def _ifmap_service_template_delete(self, obj_ids):
    def _ifmap_virtual_ip_alloc(self, parent_type, fq_name):
    def _ifmap_virtual_ip_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_virtual_ip_create(self, obj_ids, obj_dict):
    def _ifmap_virtual_ip_read(self, ifmap_id, field_names = None):
    def _ifmap_virtual_ip_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_virtual_ip_update(self, ifmap_id, new_obj_dict):
    def _ifmap_virtual_ip_list(self, parent_type, parent_fq_name):
    def _ifmap_virtual_ip_delete(self, obj_ids):
    def _ifmap_security_group_alloc(self, parent_type, fq_name):
    def _ifmap_security_group_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_security_group_create(self, obj_ids, obj_dict):
    def _ifmap_security_group_read(self, ifmap_id, field_names = None):
    def _ifmap_security_group_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_security_group_update(self, ifmap_id, new_obj_dict):
    def _ifmap_security_group_list(self, parent_type, parent_fq_name):
    def _ifmap_security_group_delete(self, obj_ids):
    def _ifmap_provider_attachment_alloc(self, parent_type, fq_name):
    def _ifmap_provider_attachment_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_provider_attachment_create(self, obj_ids, obj_dict):
    def _ifmap_provider_attachment_read(self, ifmap_id, field_names = None):
    def _ifmap_provider_attachment_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_provider_attachment_update(self, ifmap_id, new_obj_dict):
    def _ifmap_provider_attachment_list(self, parent_type, parent_fq_name):
    def _ifmap_provider_attachment_delete(self, obj_ids):
    def _ifmap_network_ipam_alloc(self, parent_type, fq_name):
    def _ifmap_network_ipam_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_network_ipam_create(self, obj_ids, obj_dict):
    def _ifmap_network_ipam_read(self, ifmap_id, field_names = None):
    def _ifmap_network_ipam_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_network_ipam_update(self, ifmap_id, new_obj_dict):
    def _ifmap_network_ipam_list(self, parent_type, parent_fq_name):
    def _ifmap_network_ipam_delete(self, obj_ids):
    def _ifmap_loadbalancer_healthmonitor_alloc(self, parent_type, fq_name):
    def _ifmap_loadbalancer_healthmonitor_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_loadbalancer_healthmonitor_create(self, obj_ids, obj_dict):
    def _ifmap_loadbalancer_healthmonitor_read(self, ifmap_id, field_names = None):
    def _ifmap_loadbalancer_healthmonitor_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_loadbalancer_healthmonitor_update(self, ifmap_id, new_obj_dict):
    def _ifmap_loadbalancer_healthmonitor_list(self, parent_type, parent_fq_name):
    def _ifmap_loadbalancer_healthmonitor_delete(self, obj_ids):
    def _ifmap_virtual_network_alloc(self, parent_type, fq_name):
    def _ifmap_virtual_network_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_virtual_network_create(self, obj_ids, obj_dict):
    def _ifmap_virtual_network_read(self, ifmap_id, field_names = None):
    def _ifmap_virtual_network_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_virtual_network_update(self, ifmap_id, new_obj_dict):
    def _ifmap_virtual_network_list(self, parent_type, parent_fq_name):
    def _ifmap_virtual_network_delete(self, obj_ids):
    def _ifmap_project_alloc(self, parent_type, fq_name):
    def _ifmap_project_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_project_create(self, obj_ids, obj_dict):
    def _ifmap_project_read(self, ifmap_id, field_names = None):
    def _ifmap_project_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_project_update(self, ifmap_id, new_obj_dict):
    def _ifmap_project_list(self, parent_type, parent_fq_name):
    def _ifmap_project_delete(self, obj_ids):
    def _ifmap_logical_interface_alloc(self, parent_type, fq_name):
    def _ifmap_logical_interface_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_logical_interface_create(self, obj_ids, obj_dict):
    def _ifmap_logical_interface_read(self, ifmap_id, field_names = None):
    def _ifmap_logical_interface_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_logical_interface_update(self, ifmap_id, new_obj_dict):
    def _ifmap_logical_interface_list(self, parent_type, parent_fq_name):
    def _ifmap_logical_interface_delete(self, obj_ids):
    def _ifmap_routing_instance_alloc(self, parent_type, fq_name):
    def _ifmap_routing_instance_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_routing_instance_create(self, obj_ids, obj_dict):
    def _ifmap_routing_instance_read(self, ifmap_id, field_names = None):
    def _ifmap_routing_instance_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_routing_instance_update(self, ifmap_id, new_obj_dict):
    def _ifmap_routing_instance_list(self, parent_type, parent_fq_name):
    def _ifmap_routing_instance_delete(self, obj_ids):
    def _ifmap_virtual_machine_interface_alloc(self, parent_type, fq_name):
    def _ifmap_virtual_machine_interface_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_virtual_machine_interface_create(self, obj_ids, obj_dict):
    def _ifmap_virtual_machine_interface_read(self, ifmap_id, field_names = None):
    def _ifmap_virtual_machine_interface_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_virtual_machine_interface_update(self, ifmap_id, new_obj_dict):
    def _ifmap_virtual_machine_interface_list(self, parent_type, parent_fq_name):
    def _ifmap_virtual_machine_interface_delete(self, obj_ids):
    def _ifmap_logical_router_alloc(self, parent_type, fq_name):
    def _ifmap_logical_router_set(self, my_imid, existing_metas, obj_dict):
    def _ifmap_logical_router_create(self, obj_ids, obj_dict):
    def _ifmap_logical_router_read(self, ifmap_id, field_names = None):
    def _ifmap_logical_router_read_to_meta_index(self, ifmap_id, field_names = None):
    def _ifmap_logical_router_update(self, ifmap_id, new_obj_dict):
    def _ifmap_logical_router_list(self, parent_type, parent_fq_name):
    def _ifmap_logical_router_delete(self, obj_ids):
    def domain_alloc_ifmap_id(self, parent_type, fq_name):
    def global_vrouter_config_alloc_ifmap_id(self, parent_type, fq_name):
    def instance_ip_alloc_ifmap_id(self, parent_type, fq_name):
    def network_policy_alloc_ifmap_id(self, parent_type, fq_name):
    def virtual_DNS_record_alloc_ifmap_id(self, parent_type, fq_name):
    def route_target_alloc_ifmap_id(self, parent_type, fq_name):
    def floating_ip_alloc_ifmap_id(self, parent_type, fq_name):
    def floating_ip_pool_alloc_ifmap_id(self, parent_type, fq_name):
    def physical_router_alloc_ifmap_id(self, parent_type, fq_name):
    def bgp_router_alloc_ifmap_id(self, parent_type, fq_name):
    def virtual_router_alloc_ifmap_id(self, parent_type, fq_name):
    def config_root_alloc_ifmap_id(self, parent_type, fq_name):
    def subnet_alloc_ifmap_id(self, parent_type, fq_name):
    def global_system_config_alloc_ifmap_id(self, parent_type, fq_name):
    def loadbalancer_member_alloc_ifmap_id(self, parent_type, fq_name):
    def service_instance_alloc_ifmap_id(self, parent_type, fq_name):
    def namespace_alloc_ifmap_id(self, parent_type, fq_name):
    def route_table_alloc_ifmap_id(self, parent_type, fq_name):
    def physical_interface_alloc_ifmap_id(self, parent_type, fq_name):
    def access_control_list_alloc_ifmap_id(self, parent_type, fq_name):
    def virtual_DNS_alloc_ifmap_id(self, parent_type, fq_name):
    def customer_attachment_alloc_ifmap_id(self, parent_type, fq_name):
    def loadbalancer_pool_alloc_ifmap_id(self, parent_type, fq_name):
    def virtual_machine_alloc_ifmap_id(self, parent_type, fq_name):
    def interface_route_table_alloc_ifmap_id(self, parent_type, fq_name):
    def service_template_alloc_ifmap_id(self, parent_type, fq_name):
    def virtual_ip_alloc_ifmap_id(self, parent_type, fq_name):
    def security_group_alloc_ifmap_id(self, parent_type, fq_name):
    def provider_attachment_alloc_ifmap_id(self, parent_type, fq_name):
    def network_ipam_alloc_ifmap_id(self, parent_type, fq_name):
    def loadbalancer_healthmonitor_alloc_ifmap_id(self, parent_type, fq_name):
    def virtual_network_alloc_ifmap_id(self, parent_type, fq_name):
    def project_alloc_ifmap_id(self, parent_type, fq_name):
    def logical_interface_alloc_ifmap_id(self, parent_type, fq_name):
    def routing_instance_alloc_ifmap_id(self, parent_type, fq_name):
    def virtual_machine_interface_alloc_ifmap_id(self, parent_type, fq_name):
    def logical_router_alloc_ifmap_id(self, parent_type, fq_name):
saju@ubuntu:/usr/lib/python2.7/dist-packages$
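
All of the helpers above follow the same per-object-type pattern (alloc / set / create / read / read_to_meta_index / update / list / delete, plus <type>_alloc_ifmap_id), so a listing like this can be regenerated on any node. Below is a minimal sketch of a script that does it by scanning a generated source file for those method definitions. The default file path is only an assumption -- point it at whichever vnc_cfg_api_server gen/*.py file actually holds the _ifmap_* helpers on your installation.

#!/usr/bin/env python
# Minimal sketch: print the _ifmap_* and *_alloc_ifmap_id method signatures
# found in a generated Contrail source file. The default path below is an
# assumption; pass the real file as the first argument if it differs.
import re
import sys

SRC = sys.argv[1] if len(sys.argv) > 1 else \
    '/usr/lib/python2.7/dist-packages/vnc_cfg_api_server/gen/vnc_ifmap_client_gen.py'

pattern = re.compile(r'\s*def\s+(_ifmap_\w+|\w+_alloc_ifmap_id)\s*\(')

with open(SRC) as f:
    for line in f:
        if pattern.match(line):
            print(line.rstrip())

Run it as, for example:

#python list_ifmap_methods.py /usr/lib/python2.7/dist-packages/vnc_cfg_api_server/gen/vnc_ifmap_client_gen.py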