Custom Search

Friday, June 26, 2015

How To Fix Celery [ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 104] Connection reset by peer


1)
$ sudo rabbitmqctl add_user myuser mypassword
Creating user "myuser" ...
...done.

$ sudo rabbitmqctl add_vhost myvhost
Creating vhost "myvhost" ...
...done.

$ sudo rabbitmqctl set_permissions -p myvhost myuser ".*" ".*" ".*"
Setting permissions for user "myuser" in vhost "myvhost" ...
...done.

2)
$ vim tasks.py

from celery import Celery

app = Celery('tasks', broker='amqp://myuser:mypassword@localhost/myvhost')

@app.task
def add(x, y):
    return x + y
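This error usually means the broker refused the connection: wrong credentials, a vhost the user cannot access, or the guest-user restrictions in newer RabbitMQ releases. It helps to check that the broker URL really carries the user, password and vhost created in step 1; note that the trailing "//" in the error message denotes the default vhost "/". A quick stdlib sketch (urlparse splits any scheme://user:pass@host:port/path URL the same way):

```python
from urllib.parse import urlparse

# broker URL from tasks.py; the path component (minus the leading '/')
# is the vhost, so 'amqp://...:5672//' would mean the default vhost '/'
url = urlparse('amqp://myuser:mypassword@localhost:5672/myvhost')

print(url.username)  # myuser
print(url.password)  # mypassword
print(url.hostname)  # localhost
print(url.port)      # 5672
print(url.path)      # /myvhost
```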

3)
$ celery -A tasks worker --loglevel=info
[2015-06-26 23:49:25,809: WARNING/MainProcess] /home/saju/thrineshwara/myenv/local/lib/python2.7/site-packages/celery/apps/worker.py:161: CDeprecationWarning:
Starting from version 3.2 Celery will refuse to accept pickle by default.

The pickle serializer is a security concern as it may give attackers
the ability to execute any command.  It's important to secure
your broker from unauthorized access when using pickle, so we think
that enabling pickle should require a deliberate action and not be
the default choice.

If you depend on pickle then you should set a setting to disable this
warning and to be sure that everything will continue working
when you upgrade to Celery 3.2::

    CELERY_ACCEPT_CONTENT = ['pickle', 'json', 'msgpack', 'yaml']

You must only enable the serializers that you will actually use.


  warnings.warn(CDeprecationWarning(W_PICKLE_DEPRECATED))

 -------------- celery@saju-Inspiron-5521 v3.1.18 (Cipater)
---- **** -----
--- * ***  * -- Linux-3.11.0-26-generic-x86_64-with-Ubuntu-14.04-trusty
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app:         tasks:0x7f6e3f1e71d0
- ** ---------- .> transport:   amqp://myuser:**@localhost:5672/myvhost
- ** ---------- .> results:     disabled
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ----
--- ***** ----- [queues]
 -------------- .> celery           exchange=celery(direct) key=celery
               

[tasks]
  . tasks.add

[2015-06-26 23:49:25,824: INFO/MainProcess] Connected to amqp://myuser:**@127.0.0.1:5672/myvhost
[2015-06-26 23:49:25,890: INFO/MainProcess] mingle: searching for neighbors
[2015-06-26 23:49:26,914: INFO/MainProcess] mingle: all alone
[2015-06-26 23:49:27,088: WARNING/MainProcess] celery@saju-Inspiron-5521 ready.


Thursday, June 25, 2015

How to add a new license key to Juniper vSRX

root@% cli
root> show system license

1)
My license

https://download.juniper.net/cust-svc/srx/E419777401.txt
E419777401 aeaqic apaeor 4altdy arwhqb impacr i6bmed
           embrgu ydgmbz bqihmu 2slawu u5lonf ygk4sf
           ozqwyb ziukrz o4t4tq 73ypay 2pgysd icl7im
           u5x4l3 4pgvmf cggson fslbu7 atr27n sh6zqe
           s2rq
          

root> request system license add my-license-file.txt

root> show system license
       









Installing Juniper Firefly Perimeter (vSRX) in VirtualBox

1)
Download junos-vsrx

junos-vsrx-12.1X47-D20.7-domestic.ova

http://www.juniper.net/us/en/products-services/security/srx-series/vsrx/#sw

2)
Untar

#tar -xvf junos-vsrx-12.1X47-D20.7-domestic.ova

3)
Convert to vdi

#vboxmanage clonehd -format VDI junos-vsrx-12.1X47-D20.7-domestic-disk1.vmdk junos-vsrx-12.1X47-D20.7-domestic-disk1-1.vdi

4)
Create a Virtualbox VM with vdi disk

5)
Login as root, no password

Tuesday, June 23, 2015

How to tweak OpenContrail supervisor

1)
Check all services run by supervisor
 
#ps -aux | grep /usr/bin/supervisord

root      1530  0.0  0.1  60712  5432 ?        S    13:44   0:01 /usr/bin/python /usr/bin/supervisord --nodaemon -c /etc/contrail/supervisord_analytics.conf
root      1532  0.0  0.1  60692  5312 ?        S    13:44   0:01 /usr/bin/python /usr/bin/supervisord --nodaemon -c /etc/contrail/supervisord_database.conf
root      1533  0.0  0.1  60924  7968 ?        S    13:44   0:01 /usr/bin/python /usr/bin/supervisord --nodaemon -c /etc/contrail/supervisord_support_service.conf
root      1535  0.0  0.1  60676  5088 ?        S    13:44   0:00 /usr/bin/python /usr/bin/supervisord --nodaemon -c /etc/contrail/supervisord_webui.conf
root      1536  0.0  0.1  60700  5268 ?        S    13:44   0:01 /usr/bin/python /usr/bin/supervisord --nodaemon -c /etc/contrail/supervisord_control.conf
root      1555  0.0  0.1  60684  5196 ?        S    13:45   0:00 /usr/bin/python /usr/bin/supervisord --nodaemon -c /etc/contrail/supervisord_vrouter.conf
root      1557  0.2  0.1  61416  7948 ?        S    13:45   0:04 /usr/bin/python /usr/bin/supervisord --nodaemon -c /etc/contrail/supervisord_openstack.conf
root      4287  0.0  0.2  60636  9556 ?        Ss   13:45   0:00 /usr/bin/python /usr/bin/supervisord -c /etc/supervisor/supervisord.conf
saju      5874  0.0  0.0  11748   928 pts/2    S+   14:15   0:00 grep --color=auto /usr/bin/supervisord
root      6365  0.1  0.3  60720 14260 ?        S    13:45   0:01 /usr/bin/python /usr/bin/supervisord --nodaemon -c /etc/contrail/supervisord_config.conf

2)
The main contrail supervisord config for the config node and the files it includes:

#sudo vim /etc/contrail/supervisord_config.conf
[include]
files = /etc/contrail/supervisord_config_files/*.ini

3)

Check the files in the "/etc/contrail/supervisord_config_files" directory. All configurations used by supervisor for the "contrail config node" can be found here.

#sudo ls /etc/contrail/supervisord_config_files
contrail-api.ini       contrail-device-manager.ini  contrail-nodemgr-config.ini  contrail-svc-monitor.ini
contrail-config.rules  contrail-discovery.ini        contrail-schema.ini         ifmap.ini

4)
Check the command used by supervisor to run "contrail-api" server.

#sudo vim /etc/contrail/supervisord_config_files/contrail-api.ini
[program:contrail-api]
command=/usr/bin/contrail-api --conf_file /etc/contrail/contrail-api.conf --conf_file /etc/contrail/contrail-keystone-auth.conf --listen_port 910%(process_num)01d --worker_id %(process_num)s
numprocs=1
process_name=%(process_num)s
redirect_stderr=true
stdout_logfile= /var/log/contrail/contrail-api-%(process_num)s-stdout.log
stderr_logfile=/dev/null
priority=440
autostart=true
killasgroup=true
stopsignal=KILL
exitcodes=0
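The `%(process_num)` placeholders in the command above are expanded by supervisord using Python %-style string interpolation; with numprocs=1 only process 0 exists, which is how `--listen_port 910%(process_num)01d` becomes port 9100 (as seen in step 6). A quick sketch of the expansion:

```python
# the same %-interpolation supervisord applies to [program:x] commands
command = '--listen_port 910%(process_num)01d --worker_id %(process_num)s'

for n in range(2):  # numprocs instances get process_num 0, 1, ...
    print(command % {'process_num': n})
# --listen_port 9100 --worker_id 0
# --listen_port 9101 --worker_id 1
```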

5)

Check the rules used by supervisor for "contrail-config".

#sudo vim /etc/contrail/supervisord_config_files/contrail-config.rules
{ "Rules": [
    {"processname": "contrail-api", "process_state": "PROCESS_STATE_STOPPED", "action": "sudo service supervisor-config restart"},
    {"processname": "contrail-api", "process_state": "PROCESS_STATE_EXITED", "action": "sudo service supervisor-config restart"},
    {"processname": "contrail-api", "process_state": "PROCESS_STATE_FATAL", "action": "sudo service supervisor-config restart"},

    {"processname": "ifmap", "process_state": "PROCESS_STATE_STOPPED", "action": "sudo service supervisor-config restart"},
    {"processname": "ifmap", "process_state": "PROCESS_STATE_EXITED", "action": "sudo service supervisor-config restart"},
    {"processname": "ifmap", "process_state": "PROCESS_STATE_FATAL", "action": "sudo service supervisor-config restart"}
     ]
}
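The rules file is plain JSON, so an edit can be sanity-checked before restarting supervisor. A sketch with a trimmed-down copy of the file embedded as a string:

```python
import json

# a trimmed-down version of contrail-config.rules from step 5
rules_text = '''
{ "Rules": [
    {"processname": "contrail-api", "process_state": "PROCESS_STATE_STOPPED",
     "action": "sudo service supervisor-config restart"},
    {"processname": "ifmap", "process_state": "PROCESS_STATE_FATAL",
     "action": "sudo service supervisor-config restart"}
] }
'''

rules = json.loads(rules_text)['Rules']
for r in rules:
    print(r['processname'], '->', r['action'])
```

One caution: `#`-commented lines (as used in step 9) are not valid JSON, so a strict parser like `json.loads` would reject the edited file; if whatever consumes the rules is strict about JSON, deleting the lines is safer than commenting them.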

6)

Find the command used to run the "contrail-api" server and copy it.

#ps -aux | grep contrail-api

root      6493  0.8  1.3 324804 61400 ?        Sl   13:46   0:17 /usr/bin/python /usr/bin/contrail-api --conf_file /etc/contrail/contrail-api.conf --conf_file /etc/contrail/contrail-keystone-auth.conf --listen_port 9100 --worker_id 0

7)
Try neutron command "neutron net-list", this should work since we haven't made any change.

export OS_USERNAME=admin
export OS_PASSWORD=secret123
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://127.0.0.1:35357/v2.0

#neutron net-list

8)
Rename "contrail-api.ini" to "contrail-api.ini_bkp"

#sudo mv /etc/contrail/supervisord_config_files/contrail-api.ini /etc/contrail/supervisord_config_files/contrail-api.ini_bkp

9)
Comment out the first 3 lines, which contain "processname": "contrail-api"

#sudo vim /etc/contrail/supervisord_config_files/contrail-config.rules
{ "Rules": [
#    {"processname": "contrail-api", "process_state": "PROCESS_STATE_STOPPED", "action": "sudo service supervisor-config restart"},
#    {"processname": "contrail-api", "process_state": "PROCESS_STATE_EXITED", "action": "sudo service supervisor-config restart"},
#    {"processname": "contrail-api", "process_state": "PROCESS_STATE_FATAL", "action": "sudo service supervisor-config restart"},

    {"processname": "ifmap", "process_state": "PROCESS_STATE_STOPPED", "action": "sudo service supervisor-config restart"},
    {"processname": "ifmap", "process_state": "PROCESS_STATE_EXITED", "action": "sudo service supervisor-config restart"},
    {"processname": "ifmap", "process_state": "PROCESS_STATE_FATAL", "action": "sudo service supervisor-config restart"}
     ]
}

10)
Restart supervisor-config

#sudo service supervisor-config restart
supervisor-config stop/waiting
supervisor-config start/running, process 21955

11)
Ensure that the "contrail-api" server is not running now

#ps -aux | grep contrail-api
saju     22333  0.0  0.0  11744   924 pts/2    S+   14:33   0:00 grep --color=auto contrail-api

12)
Try neutron command "neutron net-list", this would not work if "contrail-api" server is not running.

#neutron net-list
500-{u'NeutronError': {u'message': u'An unknown exception occurred.', u'type': u'NeutronException', u'detail': u''}}

13)
Manually Start "contrail-api" server

#sudo /usr/bin/python /usr/bin/contrail-api --conf_file /etc/contrail/contrail-api.conf --conf_file /etc/contrail/contrail-keystone-auth.conf --listen_port 9100 --worker_id 0

14)
Try neutron command "neutron net-list", this should work.

#neutron net-list
Works :)


Thursday, June 18, 2015

Git: How to search diffs and commit messages to find a commit id

1)
Dump all commits of file src/config/schema-transformer/to_bgp.py

#git log -p src/config/schema-transformer/to_bgp.py > output.txt

2)
Open the dump file output.txt and search.
#vim output.txt
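Dumping to a file works, but git can also search directly: `git log -S` (the "pickaxe") finds commits whose diff adds or removes a string, and `git log --grep` searches commit messages. A self-contained sketch that builds a throwaway repo and searches it (assumes git is on PATH; the file name is borrowed from step 1):

```python
import os
import subprocess
import tempfile

def git(repo, *args):
    """Run a git command in `repo` and return its stdout."""
    cmd = ['git', '-C', repo,
           '-c', 'user.name=test', '-c', 'user.email=test@example.com']
    return subprocess.check_output(cmd + list(args), text=True)

repo = tempfile.mkdtemp()
git(repo, 'init')
with open(os.path.join(repo, 'to_bgp.py'), 'w') as f:
    f.write('def export_route(): pass\n')
git(repo, 'add', '.')
git(repo, 'commit', '-m', 'add export_route helper')

# search diffs for a string (what grepping the -p dump achieves)
pickaxe = git(repo, 'log', '-S', 'export_route', '--oneline')
# search commit messages
msg_hits = git(repo, 'log', '--grep', 'helper', '--oneline')
print(pickaxe, msg_hits)
```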




Monday, June 15, 2015

Contrail add Static Routes via Host Routes notes


http://fosshelp.blogspot.in/2015/06/contrail-add-static-routes-via-host.html

1)
Port Create Flow.

This checks the "host routes" of the subnet, then creates an InterfaceRouteTable and attaches it to the port.

a)
The port create operation creates an "interface route table" for the port if the subnet has a "host route" whose next-hop is this port's IP and "apply_subnet_host_routes=True" is set in /etc/contrail/api_server.conf .

https://github.com/Juniper/contrail-controller/blob/master/src/config/vnc_openstack/vnc_openstack/neutron_plugin_db.py#L3454
def port_create(self, context, port_q):
        if self._apply_subnet_host_routes:
            self._port_check_and_add_iface_route_table(ret_port_q['fixed_ips'],
                                                       net_obj, port_obj)

b)

https://github.com/Juniper/contrail-controller/blob/master/src/config/vnc_openstack/vnc_openstack/neutron_plugin_db.py#L2089

def _port_check_and_add_iface_route_table(self, fixed_ips, net_obj,port_obj):

* Get ipam of the network.
* Get subnets of the network from the ipam.
* Get host routes of each subnet
* If the IP address of this port is defined as the next-hop in the "host routes" of any subnet on this network, then invoke "_port_add_iface_route_table", passing the CIDRs defined in the "host routes" for this IP (next-hop) along with the port and subnet objects.

c)
https://github.com/Juniper/contrail-controller/blob/master/src/config/vnc_openstack/vnc_openstack/neutron_plugin_db.py#L2114

* Create InterfaceRouteTable and set routes.
* Add created InterfaceRouteTable to port.

from vnc_api.gen.resource_client import InterfaceRouteTable

def _port_add_iface_route_table(self, route_prefix_list, port_obj, subnet_id):

    project_obj = self._project_read(proj_id=port_obj.parent_uuid)

    #Create RouteTableType object and set route as empty.
    route_table = RouteTableType(intf_rt_name)
    route_table.set_route([])
   
    #Create InterfaceRouteTable object and set interface_route_table_routes
    intf_route_table = InterfaceRouteTable(
                        interface_route_table_routes=route_table,
                        parent_obj=project_obj,
                        name=intf_rt_name)

    #Create InterfaceRouteTable by making vnc api call
    intf_route_table_id = self._vnc_lib.interface_route_table_create(
                            intf_route_table)
    intf_route_table_obj = self._vnc_lib.interface_route_table_read(
                            id=intf_route_table_id)

    #Get routes from InterfaceRouteTable object.
    rt_routes = intf_route_table_obj.get_interface_route_table_routes()
    routes = rt_routes.get_route()
    # delete any old routes
    routes = []
   
    #Set routes to InterfaceRouteTable
    for prefix in route_prefix_list:
        routes.append(RouteType(prefix=prefix))
    rt_routes.set_route(routes)
    intf_route_table_obj.set_interface_route_table_routes(rt_routes)
    self._vnc_lib.interface_route_table_update(intf_route_table_obj)
   
    #Add InterfaceRouteTable to port
    port_obj.add_interface_route_table(intf_route_table_obj)
    self._vnc_lib.virtual_machine_interface_update(port_obj)

d)
d1)

https://github.com/Juniper/contrail-controller/blob/26f014b4f6e55afbb09d86551e9efea5d9efe30e/src/schema/vnc_cfg.xsd#L1409
contrail-controller/src/schema/vnc_cfg.xsd

<xsd:complexType name="RouteTableType">
    <xsd:all>
        <xsd:element name="route" type="RouteType" maxOccurs="unbounded"/>
    </xsd:all>
</xsd:complexType>

d2)
https://github.com/Juniper/contrail-controller/blob/26f014b4f6e55afbb09d86551e9efea5d9efe30e/src/schema/vnc_cfg.xsd#L1437
contrail-controller/src/schema/vnc_cfg.xsd


 <xsd:element name="interface-route-table" type="ifmap:IdentityType"/>

<xsd:element name="interface-route-table-routes" type="RouteTableType"/>


2)
Examples:

a)
Static route util example, using same logic.
https://github.com/Juniper/contrail-controller/blob/master/src/config/utils/provision_static_route.py

b)
Another static route example from svc-monitor.

https://github.com/Juniper/contrail-controller/blob/332bc0a51558439eec4e19b56380d9fc15c4aff7/src/config/svc-monitor/svc_monitor/instance_manager.py#L109

3)
Some Findings


---------------

* Example of a host route entry added to subnet "10.1.1.0/24" of network "mynetwork1": "12.1.1.0/24, 10.1.1.3".
* Here 10.1.1.3 is the IP of a VM that belongs to the same subnet where we added the host route "12.1.1.0/24, 10.1.1.3".
* Routes in the subnet's "Host Routes" get added to the VRF mapped to the network.
* The next-hop should be the IP of a VM.
* The route gets deleted from the VRF when we shut down the next-hop VM.
* The route gets added back to the VRF when we start the next-hop VM; check on the controller at http://ip:8083/Snh_IFMapTableShowReq?x=interface-route-table
* Other VMs in the same subnet can redirect traffic destined to "12.1.1.0/24" to VM "10.1.1.3".

---------------

* mynetwork1 --> subnet1:(10.1.1.0/24), subnet2:(10.5.5.0/24)

* Deleting the "host route" from the subnet (edit subnet) removes the route from the controller "http://ip:8083/Snh_IFMapTableShowReq?x=interface-route-table" and from the VRF mapped to the network. After that, if we ping the next-hop VM:10.1.1.3 from a VM:10.5.5.4 that belongs to a different subnet in the same network, the ping does not work; we need to restart the next-hop VM:10.1.1.3 to make the ping work.

* Similarly, if we add the "host route" back to the subnet, the route is added to the controller "http://ip:8083/Snh_IFMapTableShowReq?x=interface-route-table" and to the VRF mapped to the network. This will not prevent the ping from VM:10.5.5.4 (in a different subnet of the same network) to VM:10.1.1.3; we need to restart the next-hop VM:10.1.1.3 to prevent the ping.

---------------

4)
Update subnet flow.


* Get all IP objects of the network where this subnet belongs.
* If an IP matches a next-hop defined in the subnet's "host routes", then find the port associated with the IP, create an InterfaceRouteTable, and attach it to the port.

a)
https://github.com/Juniper/contrail-controller/blob/26f014b4f6e55afbb09d86551e9efea5d9efe30e/src/config/vnc_openstack/vnc_openstack/neutron_plugin_db.py#L2582
 

contrail-controller/src/config/vnc_openstack/vnc_openstack/neutron_plugin_db.py

def subnet_update(self, subnet_id, subnet_q):
    if self._apply_subnet_host_routes:
        old_host_routes = subnet_vnc.get_host_routes()
        subnet_cidr = '%s/%s' % (subnet_vnc.subnet.get_ip_prefix(),
                                 subnet_vnc.subnet.get_ip_prefix_len())
        self._port_update_iface_route_table(net_obj,
                                            subnet_cidr,
                                            subnet_id,
                                            host_routes,
                                            old_host_routes)


b)
https://github.com/Juniper/contrail-controller/blob/26f014b4f6e55afbb09d86551e9efea5d9efe30e/src/config/vnc_openstack/vnc_openstack/neutron_plugin_db.py#L2150
 

contrail-controller/src/config/vnc_openstack/vnc_openstack/neutron_plugin_db.py

def _port_update_iface_route_table(self, net_obj, subnet_cidr, subnet_id,
                                   new_host_routes, old_host_routes=None):

                                  
    # get the list of all the ip objs for this network
    ipobjs = self._instance_ip_list(back_ref_id=[net_obj.uuid])
    for ipobj in ipobjs:
        ipaddr = ipobj.get_instance_ip_address()
        if ipaddr in old_host_prefixes:
            self._port_remove_iface_route_table(ipobj, subnet_id)
            continue

        if ipaddr in new_host_prefixes:
            port_back_refs = ipobj.get_virtual_machine_interface_refs()
            for port_ref in port_back_refs:
                port_obj = self._virtual_machine_interface_read(
                                port_id=port_ref['uuid'])
                self._port_add_iface_route_table(new_host_prefixes[ipaddr],
                                                 port_obj, subnet_id)


5)
a)

#sudo grep -r apply_subnet_host_routes /etc/contrail

b)
#sudo grep -r apply_subnet_host_routes /usr/lib/python2.7/dist-packages/vnc_openstack

c)
IFMap Node table

http://192.168.56.102:8083/Snh_IFMapNodeTableListShowReq

Route table
http://192.168.56.102:8083/Snh_IFMapTableShowReq?x=route-table

Interface Route table
http://192.168.56.102:8083/Snh_IFMapTableShowReq?x=interface-route-table

VRF List
http://192.168.56.102:8085/Snh_VrfListReq?name=

Static Route
https://github.com/Juniper/contrail-controller/wiki/Static-Routes

Patch
https://github.com/Juniper/contrail-controller/commit/5031f59adcca7e238c1489fde2558521e2c2d81a#diff-1e8d43a6c800def7704681bd7b7827bd



Thursday, June 11, 2015

Contrail add Static Routes via Host Routes

OpenContrail add Static Routes via Host Routes

1)
Static Routes

https://github.com/Juniper/contrail-controller/wiki/Static-Routes

* We can add static routes via the contrail UI and a util script.
* Util script: https://github.com/Juniper/contrail-controller/blob/master/src/config/utils/provision_static_route.py
* Using this, in a virtual-network with subnet 10.0.0.0/24, all traffic originating from a VM and destined to subnet 11.1.1.0/24 can be configured to go via a service appliance (VM), using a static route configured on the service VM's interface.

* We can also add static routes via host routes; for that we have to set "apply_subnet_host_routes=True" in /etc/contrail/contrail-api.conf

2)
How to add a static route via host routes?

Site to Site VPN in OpenContrail:
https://github.com/numansiddique/snat_test/wiki/Site-to-Site-VPN-in-OpenContrail

3)
Patch which enables us to add static routes via host routes.


https://review.opencontrail.org/#/c/1462/

https://github.com/Juniper/contrail-controller/commit/5031f59adcca7e238c1489fde2558521e2c2d81a#diff-1e8d43a6c800def7704681bd7b7827bd

4)
https://www.youtube.com/watch?v=PKJWFsyzBGw
http://fosshelp.blogspot.in/2015/06/contrail-add-static-routes-via-host_15.html



Wednesday, June 10, 2015

OpenStack Difference between static routes and host routes

1)
Host routes:
Host routes added to a subnet get pushed to the instances in that subnet via dhcp.


http://xuhanp.tumblr.com/post/107088879052/openstack-neutron-subnet-extra-routes-usage


https://github.com/numansiddique/snat_test/wiki/Site-to-Site-VPN-in-OpenContrail


2)
Static routes:
A route added to a router via router-create or router-update is added to the routing table within the qrouter namespace and affects all connected subnets/instances.

How to stop and start contrail supervisor-webui and supervisor-analytics services


a)
List all supervisord services

#ps -aux | grep /usr/bin/supervisord

b)
Stop supervisor-webui service

#sudo service supervisor-webui status
#sudo service supervisor-webui stop


c)
Stop supervisor-analytics service

#sudo service supervisor-analytics status
#sudo service supervisor-analytics stop


https://github.com/Juniper/contrail-fabric-utils/blob/master/fabfile/tasks/services.py


Tuesday, June 9, 2015

How to sync keystone domains and projects while starting the contrail api_server

1)
Script to Start/Stop Contrail API Server
#python /usr/local/lib/python2.7/dist-packages/vnc_cfg_api_server/vnc_cfg_api_server.py --conf_file /etc/contrail/contrail-api.conf --reset_config --rabbit_user guest --rabbit_password contrail123

2)
https://github.com/Juniper/contrail-controller/blob/master/src/config/api-server/vnc_cfg_api_server.py#L1038

/usr/local/lib/python2.7/dist-packages/vnc_cfg_api_server/vnc_cfg_api_server.py

contrail-controller/src/config/api-server/vnc_cfg_api_server.py

class VncApiServer(VncApiServerGen):

    def _db_init_entries(self):
        try:
            self._extension_mgrs['resync'].map(self._resync_domains_projects) <==1
        except Exception as e:
            pass

    def _resync_domains_projects(self, ext): <==2
        ext.obj.resync_domains_projects() <==3

3)
https://github.com/Juniper/contrail-controller/blob/master/src/config/vnc_openstack/vnc_openstack/__init__.py#L591
contrail-controller/src/config/vnc_openstack/vnc_openstack/__init__.py


class OpenstackDriver(vnc_plugin_base.Resync):

    def resync_domains_projects(self): <==4
        # add asynchronously
        self._main_glet = gevent.spawn(self._resync_domains_projects_forever)<==5
        self._worker_glets = []
        for x in range(self._resync_number_workers):
            self._worker_glets.append(gevent.spawn(self._resync_worker)) <==11

    def _resync_domains_projects_forever(self):<==6
         while True:
            retry = self._resync_all_domains() <==7
            retry = self._resync_all_projects()  <==8

    def _resync_all_projects(self):<==9
            ks_project_ids = set(
                [str(uuid.UUID(proj['id']))
                    for proj in self._ks_projects_list()])

            for ks_project_id in ks_project_ids - vnc_project_ids:
                self.q.put((Q_CREATE, 'project', ks_project_id))<==10

    def _resync_worker(self): <==12
        while True:
            oper, obj_type, obj_id = self.q.get()<==13
            try:
                if oper == Q_DELETE:
                    if obj_type == 'domain':
                        self._del_domain_from_vnc(obj_id)
                    elif obj_type == 'project':
                        self._del_project_from_vnc(obj_id)
                    else:
                        raise KeyError("An invalid obj_type was specified: %s",
                                        obj_type)
                elif oper == Q_CREATE:<==14
                    if obj_type == 'domain':
                        self._add_domain_to_vnc(obj_id)
                    elif obj_type == 'project':<==15
                        self._add_project_to_vnc(obj_id)<==16
                    else:
                        raise KeyError("An invalid obj_type was specified: %s",
                                        obj_type)
                else:
                    raise KeyError("An invalid operation was specified: %s", oper)
            except (ValueError, KeyError):
                # For an unpack error or and invalid kind.
                self.log_exception()
            finally:
                self.q.task_done()
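The resync flow above (<==7 through <==16) boils down to: compute set differences between what keystone has and what the vnc db has, enqueue an (operation, obj_type, obj_id) tuple for each difference, and let worker greenlets drain the queue. A minimal single-threaded sketch with the stdlib queue (names are illustrative, not the contrail API):

```python
from queue import Queue

Q_CREATE, Q_DELETE = 'create', 'delete'

ks_project_ids = {'p1', 'p2', 'p3'}   # projects keystone knows about
vnc_project_ids = {'p2'}              # projects already in the vnc db

q = Queue()
for pid in ks_project_ids - vnc_project_ids:   # missing in vnc -> create
    q.put((Q_CREATE, 'project', pid))
for pid in vnc_project_ids - ks_project_ids:   # stale in vnc -> delete
    q.put((Q_DELETE, 'project', pid))

created = []
while not q.empty():                  # the worker loop; gevent-spawned in contrail
    oper, obj_type, obj_id = q.get()
    if oper == Q_CREATE and obj_type == 'project':
        created.append(obj_id)        # stands in for _add_project_to_vnc()
    q.task_done()

print(sorted(created))  # ['p1', 'p3']
```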

4)
Notes
>>> import Queue
>>> q = Queue.Queue(maxsize=1000)
>>> q.put((1,2,3))
>>> q.put((2,2,3))
>>> q.put((3,2,3))
>>> q.put((4,2,3))
>>>
>>> q.get()
(1, 2, 3)
>>> q.get()
(2, 2, 3)
>>> q.get()
(3, 2, 3)
>>> q.get()
(4, 2, 3)
>>>
>>> q.queue
deque([])


How to debug neutron command flow in contrail vnc_openstack


1)
Add breakpoint or print statement
#sudo vim /usr/local/lib/python2.7/dist-packages/vnc_openstack/neutron_plugin_interface.py

def plugin_http_post_network(self):

    print "=======", context['operation']
    import pdb; pdb.set_trace()



2)
Restart API Server

#python /usr/local/lib/python2.7/dist-packages/vnc_cfg_api_server/vnc_cfg_api_server.py --conf_file /etc/contrail/contrail-api.conf --reset_config --rabbit_user guest --rabbit_password contrail123 & echo $! >/home/dev-net/contrail-installer/status/contrail/apiSrv.pid; fg || echo "apiSrv failed to start" | tee "/home/dev-net/contrail-installer/status/contrail/apiSrv.failure"

3)
Fire neutron commands

#neutron net-list
#neutron net-create nw1





Convert XML to XSD and generate Python Data Structure

1)
Install generateDS

a)
Create a virtualenv and activate it.

b)
#wget https://pypi.python.org/packages/source/g/generateDS/generateDS-2.16a0.tar.gz#md5=bc110d5987da661274c2f2532e673488
#tar -xzf generateDS-2.16a0.tar.gz
#cd generateDS-2.16a0
#python setup.py install


c)

#pip install lxml

2)
Convert XML file to XSD file


You can use the following site to convert the XML file "myfile.xml" to an XSD file; save it as "convertedfile.xsd".
http://xmlgrid.net/xml2xsd.html

3)
Create Python Data Structure from "convertedfile.xsd" and save it as "pydatastruct.py".

#generateDS.py -f -o pydatastruct.py convertedfile.xsd

4)
Print all classes defined in the "pydatastruct.py"

#grep "^class " pydatastruct.py

5)
Parse "myfile.xml" and build element using "pydatastruct.py"

import pydatastruct
from lxml import etree

#parse "myfile.xml" and find the element whose tag matches "{lcn-lcn_ctrl_d}flavours"
tree = etree.parse("myfile.xml")
for x in tree.getroot().getchildren()[0].getchildren():
    if x.tag == '{lcn-lcn_ctrl_d}flavours':
        flavour_el = x

#Build the element "flavour" with all its attributes
flavour_el_obj = pydatastruct.flavoursType()
flavour_el_build_obj = flavour_el_obj.build(flavour_el)
dir(flavour_el_build_obj)

#print the value of element "flavour"
flavour_el_build_obj.get_flavour_id()
flavour_el_build_obj.vdus.get_memory().get_total_memory_gb()
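The '{lcn-lcn_ctrl_d}flavours' string above is ElementTree/lxml "Clark notation" for a namespaced element: a tag is written as {namespace-uri}localname. A stdlib-only sketch of matching such a tag (the sample XML is invented for illustration):

```python
import xml.etree.ElementTree as ET

xml_text = '''<root xmlns:d="lcn-lcn_ctrl_d">
  <vnf>
    <d:flavours><d:flavour_id>small</d:flavour_id></d:flavours>
  </vnf>
</root>'''

root = ET.fromstring(xml_text)
found = None
for el in root[0]:                       # iterate children of <vnf>
    if el.tag == '{lcn-lcn_ctrl_d}flavours':
        found = el

# child tags carry the same {namespace} prefix in Clark notation
print(found[0].tag, found[0].text)  # {lcn-lcn_ctrl_d}flavour_id small
```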

6)
http://www.davekuhlman.org/generateDS.html#building-instances

7)
http://lxml.de/tutorial.html

http://www.davekuhlman.org/generateds_tutorial.html <===== XSD to python class
http://www.xml.com/pub/a/2003/06/11/py-xml.html <===

http://xmlgrid.net/xml2xsd.html <=== XML to XSD

http://www.davekuhlman.org/generateDS.html#how-to-build-and-install-it

http://infohost.nmt.edu/tcc/help/pubs/pylxml/web/etree-Element.html
http://infohost.nmt.edu/tcc/help/pubs/pylxml/web/etree-parse.html

8)
Example codes:

a)
from xml.etree.ElementTree import iterparse
depth = 0
for (event, node) in iterparse('myfile.xml', ['start', 'end', 'start-ns', 'end-ns']):
    if event == 'end':
        depth -= 1
    if not isinstance(node, tuple):
        if node:  
            print "." * depth*2, (event, node.tag)
    if event == 'start':
        depth += 1

b)
from lxml import etree
tree = etree.parse("myfile.xml")
dir(tree)
root = tree.getroot()
dir(root)
children = root.getchildren()

for e in root.getchildren():
    print (e.tag, e.text, e.attrib)

etree.tostring(tree)
etree.tostring(root)
etree.tostring(children[0])

root.tag.title()
root.attrib

Thursday, June 4, 2015

How to update GitHub forked repository

1)
Clone forked repo
#git clone https://github.com/sajuptpm/puppet-rjil.git
#cd puppet-rjil


2)
Check all existing branches
#git branch
  add-sajuptpm-key
  contrail_quota
* master

3)
Add official remote (official repo) and name it "official_remote"
#git remote add official_remote https://github.com/JioCloud/puppet-rjil.git

4)
Check all remotes
#git remote -v
official_remote    https://github.com/JioCloud/puppet-rjil.git (fetch)
official_remote    https://github.com/JioCloud/puppet-rjil.git (push)
origin    https://github.com/sajuptpm/puppet-rjil.git (fetch)
origin    https://github.com/sajuptpm/puppet-rjil.git (push)

5)
Fetch the latest version of master from official remote "official_remote"
#git fetch official_remote

6)
Update the master of the forked repo (takes new commits from the master of the official repo and puts them into the master of the forked repo)
#git rebase official_remote/master

7)
Push changes to forked repo in the github
#git push origin master

8)
Open your github account and check the forked repo



Wednesday, June 3, 2015

How to test puppet code : Test Spec error and Compilation error

1)
#sudo apt-get install ruby
#sudo apt-get install ruby-dev


2)
http://bundler.io/
#sudo gem install bundler

3)
Clone the puppet project from github, go to the cloned folder, and run:


#bundle install
OR
#bundle install --gemfile=Gemfile


* This will install all the dependencies defined in the "Gemfile"
* Use `bundle show [gemname]` to see where a bundled gem is installed.
* https://puppetlabs.com/blog/the-next-generation-of-puppet-module-testing

4)
Run spec tests in a clean fixtures directory

#rake spec
OR
#bundle exec rake spec

* Run "#rake" to get help page

5)
Check that your Puppet manifests conform to the style guide
#bundle exec rake lint
#bundle exec rake lint | grep ERROR

6)
To check for compilation errors and view a log of events.
#puppet apply --noop --debug manifests/init.pp



Monday, June 1, 2015

python Permanently add a directory to PYTHONPATH sys.path

1)
Print all paths

#python -c "import sys; print sys.path"

2)
Find easy-install.pth

#find /usr -name "*.pth"

3)
Edit easy-install.pth and add your paths

#sudo vim /usr/local/lib/python2.7/dist-packages/easy-install.pth

4)
Print all paths again

#python -c "import sys; print sys.path"
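Instead of editing easy-install.pth by hand, the same .pth mechanism can be exercised from code: `site.addsitedir()` processes every `*.pth` file in a directory and appends each listed path to `sys.path`. A small sketch (the paths are temporary directories, just for illustration):

```python
import os
import site
import sys
import tempfile

extra = tempfile.mkdtemp()    # the directory we want importable
sitedir = tempfile.mkdtemp()  # the directory that will hold the .pth file

# each non-comment line of a .pth file is a path to append to sys.path
with open(os.path.join(sitedir, 'my-paths.pth'), 'w') as f:
    f.write(extra + '\n')

site.addsitedir(sitedir)      # processes *.pth files in that directory
print(extra in sys.path)      # True
```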


AttributeError: 'Module_six_moves_urllib_parse' object has no attribute 'SplitResult'

Fix
====
* Download debian package of python-six_1.9 and install it.
#wget http://ftp.us.debian.org/debian/pool/main/s/six/python-six_1.9.0-3_all.deb
 

#sudo apt-get install gdebi
 

#sudo gdebi python-six_1.9.0-3_all.deb
 

#python -c "import six; print six.__version__"
1.9.0


Error
=====
2015-06-01 16:25:05.536 23761 CRITICAL ec2api [-] AttributeError: 'Module_six_moves_urllib_parse' object has no attribute 'SplitResult'
2015-06-01 16:25:05.536 23761 ERROR ec2api Traceback (most recent call last):
2015-06-01 16:25:05.536 23761 ERROR ec2api   File "/usr/local/bin/ec2-api", line 10, in
2015-06-01 16:25:05.536 23761 ERROR ec2api     sys.exit(main())
2015-06-01 16:25:05.536 23761 ERROR ec2api   File "/home/saju/ec2-api/ec2api/cmd/api.py", line 34, in main
2015-06-01 16:25:05.536 23761 ERROR ec2api     server = service.WSGIService('ec2api', max_url_len=16384)
2015-06-01 16:25:05.536 23761 ERROR ec2api   File "/home/saju/ec2-api/ec2api/service.py", line 78, in __init__
2015-06-01 16:25:05.536 23761 ERROR ec2api     self.app = self.loader.load_app(name)
2015-06-01 16:25:05.536 23761 ERROR ec2api   File "/home/saju/ec2-api/ec2api/wsgi.py", line 514, in load_app
2015-06-01 16:25:05.536 23761 ERROR ec2api     return deploy.loadapp("config:%s" % self.config_path, name=name)
2015-06-01 16:25:05.536 23761 ERROR ec2api   File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 247, in loadapp
2015-06-01 16:25:05.536 23761 ERROR ec2api     return loadobj(APP, uri, name=name, **kw)
2015-06-01 16:25:05.536 23761 ERROR ec2api   File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 272, in loadobj
2015-06-01 16:25:05.536 23761 ERROR ec2api     return context.create()
2015-06-01 16:25:05.536 23761 ERROR ec2api   File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in create
2015-06-01 16:25:05.536 23761 ERROR ec2api     return self.object_type.invoke(self)
2015-06-01 16:25:05.536 23761 ERROR ec2api   File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 144, in invoke
2015-06-01 16:25:05.536 23761 ERROR ec2api     **context.local_conf)
2015-06-01 16:25:05.536 23761 ERROR ec2api   File "/usr/local/lib/python2.7/dist-packages/paste/deploy/util.py", line 55, in fix_call
2015-06-01 16:25:05.536 23761 ERROR ec2api     val = callable(*args, **kw)
2015-06-01 16:25:05.536 23761 ERROR ec2api   File "/usr/local/lib/python2.7/dist-packages/paste/urlmap.py", line 31, in urlmap_factory
2015-06-01 16:25:05.536 23761 ERROR ec2api     app = loader.get_app(app_name, global_conf=global_conf)
2015-06-01 16:25:05.536 23761 ERROR ec2api   File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 350, in get_app
2015-06-01 16:25:05.536 23761 ERROR ec2api     name=name, global_conf=global_conf).create()
2015-06-01 16:25:05.536 23761 ERROR ec2api   File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 362, in app_context
2015-06-01 16:25:05.536 23761 ERROR ec2api     APP, name=name, global_conf=global_conf)
2015-06-01 16:25:05.536 23761 ERROR ec2api   File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 454, in get_context
2015-06-01 16:25:05.536 23761 ERROR ec2api     section)
2015-06-01 16:25:05.536 23761 ERROR ec2api   File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 476, in _context_from_use
2015-06-01 16:25:05.536 23761 ERROR ec2api     object_type, name=use, global_conf=global_conf)
2015-06-01 16:25:05.536 23761 ERROR ec2api   File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 406, in get_context
2015-06-01 16:25:05.536 23761 ERROR ec2api     global_conf=global_conf)
2015-06-01 16:25:05.536 23761 ERROR ec2api   File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 296, in loadcontext
2015-06-01 16:25:05.536 23761 ERROR ec2api     global_conf=global_conf)
2015-06-01 16:25:05.536 23761 ERROR ec2api   File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 337, in _loadfunc
2015-06-01 16:25:05.536 23761 ERROR ec2api     return loader.get_context(object_type, name, global_conf)
2015-06-01 16:25:05.536 23761 ERROR ec2api   File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 681, in get_context
2015-06-01 16:25:05.536 23761 ERROR ec2api     obj = lookup_object(self.spec)
2015-06-01 16:25:05.536 23761 ERROR ec2api   File "/usr/local/lib/python2.7/dist-packages/paste/deploy/util.py", line 68, in lookup_object
2015-06-01 16:25:05.536 23761 ERROR ec2api     module = __import__(parts)
2015-06-01 16:25:05.536 23761 ERROR ec2api   File "/home/saju/ec2-api/ec2api/api/__init__.py", line 31, in
2015-06-01 16:25:05.536 23761 ERROR ec2api     from ec2api.api import apirequest
2015-06-01 16:25:05.536 23761 ERROR ec2api   File "/home/saju/ec2-api/ec2api/api/apirequest.py", line 23, in
2015-06-01 16:25:05.536 23761 ERROR ec2api     from ec2api.api import cloud
2015-06-01 16:25:05.536 23761 ERROR ec2api   File "/home/saju/ec2-api/ec2api/api/cloud.py", line 28, in
2015-06-01 16:25:05.536 23761 ERROR ec2api     from ec2api.api import address
2015-06-01 16:25:05.536 23761 ERROR ec2api   File "/home/saju/ec2-api/ec2api/api/address.py", line 22, in
2015-06-01 16:25:05.536 23761 ERROR ec2api     from ec2api.api import clients
2015-06-01 16:25:05.536 23761 ERROR ec2api   File "/home/saju/ec2-api/ec2api/api/clients.py", line 17, in
2015-06-01 16:25:05.536 23761 ERROR ec2api     from novaclient import client as novaclient
2015-06-01 16:25:05.536 23761 ERROR ec2api   File "/usr/local/lib/python2.7/dist-packages/novaclient/client.py", line 38, in
2015-06-01 16:25:05.536 23761 ERROR ec2api     from oslo_utils import netutils
2015-06-01 16:25:05.536 23761 ERROR ec2api   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/netutils.py", line 224, in
2015-06-01 16:25:05.536 23761 ERROR ec2api     class _ModifiedSplitResult(parse.SplitResult):
2015-06-01 16:25:05.536 23761 ERROR ec2api AttributeError: 'Module_six_moves_urllib_parse' object has no attribute 'SplitResult'
2015-06-01 16:25:05.536 23761 ERROR ec2api

How to import a module installed in /usr/local/lib/python2.7/dist-packages using pip

I am using Ubuntu 14.04 and can see six in the following locations.

/usr/lib/python2.7/dist-packages/six.py --- 1.5.2

/usr/local/lib/python2.7/dist-packages/six.py --- 1.9.0 (installed via #sudo pip install six)

I can't import the six 1.9.0 that was installed via pip:
#python -c "import six; print six.__version__"
1.5.2

Ans:
It sounds like you installed it both from the platform's package manager and with pip. I recommend not doing that. (Don't install with pip system-wide; install in a virtualenv or with --user.)
There is no good way. Don't install packages system-wide except through your package manager.
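
The shadowing itself is just sys.path ordering, which can be demonstrated with two throwaway copies of a module (the `shadowdemo` name is made up for illustration):

```python
import os
import sys
import tempfile

# Two throwaway directories, each providing a module named
# "shadowdemo" with a different __version__.
older = tempfile.mkdtemp()
newer = tempfile.mkdtemp()
with open(os.path.join(older, "shadowdemo.py"), "w") as f:
    f.write("__version__ = '1.5.2'\n")
with open(os.path.join(newer, "shadowdemo.py"), "w") as f:
    f.write("__version__ = '1.9.0'\n")

# The directory that appears first on sys.path wins, exactly as
# /usr/lib/python2.7/dist-packages (six 1.5.2) shadows
# /usr/local/lib/python2.7/dist-packages (pip's six 1.9.0).
sys.path.insert(0, newer)
sys.path.insert(0, older)   # "older" now precedes "newer"

import shadowdemo
print(shadowdemo.__version__)  # '1.5.2', not '1.9.0'
```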



python /usr/bin/ld: cannot find -lz

Fix:
====
sudo apt-get install libz-dev
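
`-lz` fails because the linker looks for `libz.so` (or `libz.a`), while a runtime-only zlib install ships just `libz.so.1`. A rough sketch of that search rule (the directory contents below are made up for illustration):

```python
# Sketch of the rule behind "ld: cannot find -lz": for "-lz" the
# linker searches its library directories for libz.so or libz.a.
# A runtime-only zlib install ships only libz.so.1, which -lz does
# NOT match; installing the -dev package adds the libz.so symlink.
def resolve(lib, search_dirs, files_by_dir):
    wanted = ("lib%s.so" % lib, "lib%s.a" % lib)
    for d in search_dirs:
        for name in wanted:
            if name in files_by_dir.get(d, set()):
                return d + "/" + name
    return None

dirs = ["/usr/lib/x86_64-linux-gnu"]
runtime_only = {dirs[0]: {"libz.so.1"}}             # before libz-dev
with_dev     = {dirs[0]: {"libz.so.1", "libz.so"}}  # after libz-dev

print(resolve("z", dirs, runtime_only))  # None -> "cannot find -lz"
print(resolve("z", dirs, with_dev))      # /usr/lib/x86_64-linux-gnu/libz.so
```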

Error:
======

Running setup.py install for lxml
    /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'bugtrack_url'
      warnings.warn(msg)
    Building lxml version 3.4.4.
    Building without Cython.
    Using build configuration of libxslt 1.1.28
    building 'lxml.etree' extension
    x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/libxml2 -I/tmp/pip_build_root/lxml/src/lxml/includes -I/usr/include/python2.7 -c src/lxml/lxml.etree.c -o build/temp.linux-x86_64-2.7/src/lxml/lxml.etree.o -w
    x86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -D_FORTIFY_SOURCE=2 -g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security build/temp.linux-x86_64-2.7/src/lxml/lxml.etree.o -lxslt -lexslt -lxml2 -lz -lm -o build/lib.linux-x86_64-2.7/lxml/etree.so
    /usr/bin/ld: cannot find -lz
    collect2: error: ld returned 1 exit status
    error: command 'x86_64-linux-gnu-gcc' failed with exit status 1

    Complete output from command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip_build_root/lxml/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-NhXXr1-record/install-record.txt --single-version-externally-managed --compile:
    /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'bugtrack_url'

/usr/bin/ld: cannot find -lz

collect2: error: ld returned 1 exit status

error: command 'x86_64-linux-gnu-gcc' failed with exit status 1

ERROR: /bin/sh: 1: xslt-config: not found

Fix
=====
sudo apt-get install libxml2-dev
sudo apt-get install libxslt1-dev
sudo apt-get install python-dev

  
Error
=====
Running setup.py install for lxml
    /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'bugtrack_url'
      warnings.warn(msg)
    Building lxml version 3.4.4.
    Building without Cython.
    ERROR: /bin/sh: 1: xslt-config: not found
   
    ** make sure the development packages of libxml2 and libxslt are installed **
   

    Using build configuration of libxslt
    building 'lxml.etree' extension
    x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/tmp/pip_build_root/lxml/src/lxml/includes -I/usr/include/python2.7 -c src/lxml/lxml.etree.c -o build/temp.linux-x86_64-2.7/src/lxml/lxml.etree.o -w
    In file included from src/lxml/lxml.etree.c:239:0:
    /tmp/pip_build_root/lxml/src/lxml/includes/etree_defs.h:14:31: fatal error: libxml/xmlversion.h: No such file or directory
     #include "libxml/xmlversion.h"
                                   ^
    compilation terminated.
    error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
    Complete output from command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip_build_root/lxml/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-MD0kwd-record/install-record.txt --single-version-externally-managed --compile:
    /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'bugtrack_url'

  warnings.warn(msg)

Building lxml version 3.4.4.

Building without Cython.

ERROR: /bin/sh: 1: xslt-config: not found

** make sure the development packages of libxml2 and libxslt are installed **


In file included from src/lxml/lxml.etree.c:239:0:

/tmp/pip_build_root/lxml/src/lxml/includes/etree_defs.h:14:31: fatal error: libxml/xmlversion.h: No such file or directory

 #include "libxml/xmlversion.h"

compilation terminated.


error: command 'x86_64-linux-gnu-gcc' failed with exit status 1

greenlet.h:8:20: fatal error: Python.h: No such file or directory

Fix:
#sudo apt-get install python-dev
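
`Python.h` belongs to the interpreter's C headers, which python-dev installs on Debian/Ubuntu. A quick way to check where the compiler expects to find it (not part of the original post):

```python
import os.path
import sysconfig

# sysconfig reports the interpreter's C header directory
# (e.g. /usr/include/python2.7, matching the -I flag in the
# gcc command from the error log).
include_dir = sysconfig.get_paths()["include"]
print(include_dir)
# False here means the dev package is missing.
print(os.path.exists(os.path.join(include_dir, "Python.h")))
```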


Error:
======
 Running setup.py install for greenlet
    building 'greenlet' extension
    x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.7 -c greenlet.c -o build/temp.linux-x86_64-2.7/greenlet.o
    In file included from greenlet.c:5:0:
    greenlet.h:8:20: fatal error: Python.h: No such file or directory
     #include <Python.h>
                        ^
    compilation terminated.
    error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
    Complete output from command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip_build_root/greenlet/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-pd1And-record/install-record.txt --single-version-externally-managed --compile:
    running install

running build

running build_ext

building 'greenlet' extension

creating build

creating build/temp.linux-x86_64-2.7

x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.7 -c greenlet.c -o build/temp.linux-x86_64-2.7/greenlet.o

In file included from greenlet.c:5:0:

greenlet.h:8:20: fatal error: Python.h: No such file or directory

 #include <Python.h>

                    ^

compilation terminated.

error: command 'x86_64-linux-gnu-gcc' failed with exit status 1