Monday, November 30, 2015

vagrant virtualbox VM creation command VBoxManage

You can see the exact import command by running "ps aux | grep VBoxManage" just after "vagrant up".

$/usr/lib/virtualbox/VBoxManage import /root/.vagrant.d/boxes/ubuntu-VAGRANTSLASH-trusty64/20151117.0.0/virtualbox/box.ovf --vsys 0 --vmname ubuntu-cloudimg-trusty-vagrant-amd64_1448828432010_1244 --vsys 0 --unit 6 --disk /root/VirtualBox VMs/ubuntu-cloudimg-trusty-vagrant-amd64_1448828432010_1244/box-disk1.vmdk

Friday, November 27, 2015

vagrant multinode setup

1)
Ubuntu 14.04.3 LTS
#sudo -i

2)
#export no_proxy='127.0.0.1,169.254.169.254,localhost'
#export http_proxy='http://100.140.192.30:10000/'
#export https_proxy='http://100.140.192.30:10000/'


3)
#apt-get update

4)
a)

#echo "deb http://download.virtualbox.org/virtualbox/debian trusty contrib" | sudo tee -a /etc/apt/sources.list

b)
#cat /etc/apt/sources.list | grep virtualbox
deb http://download.virtualbox.org/virtualbox/debian trusty contrib


5)
#apt-get update
GPG error: http://download.virtualbox.org trusty InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 54422A4B98AB5139

6)
#apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 54422A4B98AB5139

Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --homedir /tmp/tmp.yjM7CTppoR --no-auto-check-trustdb --trust-model always --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver keyserver.ubuntu.com --recv-keys 54422A4B98AB5139
gpg: requesting key 98AB5139 from hkp server keyserver.ubuntu.com
gpg: key 98AB5139: public key "Oracle Corporation (VirtualBox archive signing key) " imported
gpg: Total number processed: 1
gpg:               imported: 1

7)
#apt-get update

8)
#apt-get install linux-headers-3.13.0-63-generic

9)
#apt-get install virtualbox-5.0

10)
Install vagrant
#wget --no-check-certificate https://dl.bintray.com/mitchellh/vagrant/vagrant_1.7.4_x86_64.deb
#dpkg -i vagrant_1.7.4_x86_64.deb


11)
Install librarian-puppet
#apt-get install ruby
#gem install librarian-puppet-simple --no-ri --no-rdoc


12)
#apt-get install git
#git clone https://github.com/sajuptpm/puppet-mycloud


13)
#cd puppet-mycloud

14)
Install all puppet-mycloud dependencies from file "$pwd/Puppetfile" to "$pwd/modules"
#export pwd=`pwd`
#export git_protocol=https
#librarian-puppet install --puppetfile=$pwd/Puppetfile  --path=$pwd/modules


15)
Initialize Vagrant setup
#./vagrant_parallel_provision.sh initialize

* This command provisions a dhcp server and a separate vboxnet adapter.
* It also provisions the "httpproxy1" server, which contains the dhcp and proxy servers.
* Before running the next command, ensure puppet has finished provisioning the httpproxy server.

16)
Fix for Connection timeout:

httpproxy1: SSH address: 127.0.0.1:2222
httpproxy1: SSH username: vagrant
httpproxy1: SSH auth method: private key
httpproxy1: Warning: Connection timeout. Retrying...
httpproxy1: Warning: Connection timeout. Retrying...

* Check "/etc/resolv.conf" for correct "nameserver" entry.
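The check above can be scripted; a minimal sketch (the file path is a parameter so it can be tested against a copy, but normally you point it at /etc/resolv.conf):

```shell
# Minimal sketch: verify a resolv.conf-style file has a nameserver entry.
# Pass the path explicitly (normally /etc/resolv.conf).
check_nameserver() {
    if grep -q '^nameserver' "$1"; then
        grep '^nameserver' "$1"
    else
        echo "no nameserver entry in $1" >&2
        return 1
    fi
}
```

Usage: check_nameserver /etc/resolv.conf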

17)
#bash vagrant_parallel_provision.sh up >> vagrant.log 2>&1 &

* The above command creates a new token for consul. The token and the "vboxnet" for your cluster can be found in the "vagrant_keys" file.
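A quick way to inspect what was generated (sketch only; the variable names CONSUL_TOKEN and VBOXNET are assumptions about what gets written to "vagrant_keys" -- check the actual file for the real names):

```shell
# Sketch: print selected values from a vagrant_keys-style file.
# CONSUL_TOKEN and VBOXNET are assumed names; adjust to the real file contents.
show_vagrant_keys() {
    . "$1"
    echo "consul token: ${CONSUL_TOKEN:-<not set>}"
    echo "vboxnet:      ${VBOXNET:-<not set>}"
}
```

Usage: show_vagrant_keys ./vagrant_keys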

18)
In order to recreate the vagrant environment:
#bash vagrant_parallel_provision.sh reset >> vagrant.log 2>&1 &
This will reprovision the entire setup with a new network and a new consul token.

19)
In order to reprovision the environment after making changes to the puppet-mycloud code:
#bash vagrant_parallel_provision.sh provision >> vagrant.log 2>&1 &

20)
In order to destroy the entire system:
#bash vagrant_parallel_provision.sh destroy >> vagrant.log 2>&1 &

21)
In order to destroy and bring up a single server in the cluster
#vagrant destroy gcp1
#source vagrant_keys
#vagrant up gcp1


22)
In order to get the status of vagrant servers:
#vagrant status

23)
In order to login to a specific server:
#vagrant ssh gcp1

24)
In order to cleanup the environment:
#bash vagrant_parallel_provision.sh cleanup




Thursday, November 26, 2015

git cherry-pick: how to fix conflicts and edit the commit message

1)
* -e --> to edit commit message
$ git cherry-pick -e ffed960e08b1aa340e020decc7cb5a735e7185ce
error: could not apply ffed960... update to working version of contrail module
hint: after resolving the conflicts, mark the corrected paths
hint: with 'git add <paths>' or 'git rm <paths>'
hint: and commit the result with 'git commit'

2)
$ git diff
Fix the conflicts in the files shown.
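Before staging the file, it can help to confirm no conflict markers remain; a small sketch:

```shell
# Sketch: fail if a file still contains unresolved merge-conflict markers,
# so it can gate `git add` / `git cherry-pick --continue`.
check_conflicts() {
    if grep -nE '^(<<<<<<<|=======|>>>>>>>)' "$1"; then
        echo "unresolved conflict markers in $1" >&2
        return 1
    fi
    echo "$1 looks clean"
}
```

Usage: check_conflicts Puppetfile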

3)
$ git add Puppetfile

4)
$ git status
On branch kilo_multinode_contv2
You are currently cherry-picking commit ffed960.
  (all conflicts fixed: run "git cherry-pick --continue")
  (use "git cherry-pick --abort" to cancel the cherry-pick operation)

Changes to be committed:

    modified:   Puppetfile
    modified:   manifests/neutron/contrail/fip_pool.pp

5)
$ git cherry-pick -e --continue
[kilo_multinode_contv2 ba0dd5a] [bode] update to working version of contrail module
 Author: Dan Bode
 2 files changed, 7 insertions(+), 2 deletions(-)

6)
Abort a cherry-pick that has conflicts and is not required:
$ git cherry-pick --abort



Monday, November 23, 2015

undercloud neutron dnsmasq-dhcp debugging

* Here "40:ea:a7:33:7f:0e" is the MAC address of the bare-metal node to be provisioned. We have enabled network boot (PXE boot) on the interface with MAC address "40:ea:a7:33:7f:0e", so the UC (undercloud) will get a DHCP request from that interface of the bare-metal node.

* Here "10.140.15.31" is the IP allocated to the bare-metal node by the dnsmasq-dhcp server running in the UC.

* So we should be able to capture the following data while running "#nova boot" from the UC, confirming that the UC is getting a DHCP request from the provisioned node.
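The expected DHCPDISCOVER/OFFER/REQUEST/ACK sequence can be pulled out of syslog with a small helper (sketch):

```shell
# Sketch: extract the DHCP message types seen for a given MAC from a syslog
# file, to confirm the full DISCOVER/OFFER/REQUEST/ACK handshake completed.
dhcp_handshake() {
    mac="$1"; log="$2"
    grep "$mac" "$log" | grep -oE 'DHCP(DISCOVER|OFFER|REQUEST|ACK)'
}
```

Usage: dhcp_handshake 40:ea:a7:33:7f:0e /var/log/syslog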

1)
In UC:

root@uc1:~# tail -f /var/log/syslog | grep 40:ea:a7:33:7f:0e
Nov 23 02:15:50 uc1 dnsmasq-dhcp[42934]: DHCPDISCOVER(tapbe2e899a-38) 40:ea:a7:33:7f:0e
Nov 23 02:15:50 uc1 dnsmasq-dhcp[42934]: DHCPOFFER(tapbe2e899a-38) 10.140.15.31 40:ea:a7:33:7f:0e
Nov 23 02:15:54 uc1 dnsmasq-dhcp[42934]: DHCPREQUEST(tapbe2e899a-38) 10.140.15.31 40:ea:a7:33:7f:0e
Nov 23 02:15:54 uc1 dnsmasq-dhcp[42934]: DHCPACK(tapbe2e899a-38) 10.140.15.31 40:ea:a7:33:7f:0e host-10-140-15-31

2)
In UC:

a)
root@uc1:~# ip netns
qdhcp-f6e86850-32e9-4366-a03b-2b2a9ac481f0

b)
root@uc1:~# ip netns exec qdhcp-f6e86850-32e9-4366-a03b-2b2a9ac481f0 ifconfig
tapbe2e899a-38

c)
root@uc1:~# ip netns exec qdhcp-f6e86850-32e9-4366-a03b-2b2a9ac481f0 tcpdump -vv -i tapbe2e899a-38 | grep 40:ea:a7:33:7f:0e

    0.0.0.0.bootpc > 255.255.255.255.bootps: [udp sum ok] BOOTP/DHCP, Request from 40:ea:a7:33:7f:0e (oui Unknown), length 548, xid 0xa7117f0d, secs 4, Flags [Broadcast] (0x8000)
      Client-Ethernet-Address 40:ea:a7:33:7f:0e (oui Unknown)
      Client-Ethernet-Address 40:ea:a7:33:7f:0e (oui Unknown)
    0.0.0.0.bootpc > 255.255.255.255.bootps: [udp sum ok] BOOTP/DHCP, Request from 40:ea:a7:33:7f:0e (oui Unknown), length 548, xid 0xa7117f0d, secs 4, Flags [Broadcast] (0x8000)
      Client-Ethernet-Address 40:ea:a7:33:7f:0e (oui Unknown)
      Client-Ethernet-Address 40:ea:a7:33:7f:0e (oui Unknown)
      source link-address option (1), length 8 (1): 40:ea:a7:33:7f:0e
      source link-address option (1), length 8 (1): 40:ea:a7:33:7f:0e
     
3)
In UC:

root@uc1:~# tcpdump -vv -i br-ctlplane | grep 40:ea:a7:33:7f:0e

tcpdump: listening on br-ctlplane, link-type EN10MB (Ethernet), capture size 65535 bytes
    0.0.0.0.bootpc > 255.255.255.255.bootps: [udp sum ok] BOOTP/DHCP, Request from 40:ea:a7:33:7f:0e (oui Unknown), length 548, xid 0xa7117f0d, secs 4, Flags [Broadcast] (0x8000)
      Client-Ethernet-Address 40:ea:a7:33:7f:0e (oui Unknown)
      Client-Ethernet-Address 40:ea:a7:33:7f:0e (oui Unknown)
    0.0.0.0.bootpc > 255.255.255.255.bootps: [udp sum ok] BOOTP/DHCP, Request from 40:ea:a7:33:7f:0e (oui Unknown), length 548, xid 0xa7117f0d, secs 4, Flags [Broadcast] (0x8000)
      Client-Ethernet-Address 40:ea:a7:33:7f:0e (oui Unknown)
      Client-Ethernet-Address 40:ea:a7:33:7f:0e (oui Unknown)
      source link-address option (1), length 8 (1): 40:ea:a7:33:7f:0e
20:46:27.592585 ARP, Ethernet (len 6), IPv4 (len 4), Reply 10.140.15.31 is-at 40:ea:a7:33:7f:0e (oui Unknown), length 46
      source link-address option (1), length 8 (1): 40:ea:a7:33:7f:0e

4)
In UC:

root@uc1:~# tcpdump -vv -i phy-br-ctlplane | grep 40:ea:a7:33:7f:0e

tcpdump: WARNING: phy-br-ctlplane: no IPv4 address assigned
tcpdump: listening on phy-br-ctlplane, link-type EN10MB (Ethernet), capture size 65535 bytes

    0.0.0.0.bootpc > 255.255.255.255.bootps: [udp sum ok] BOOTP/DHCP, Request from 40:ea:a7:33:7f:0e (oui Unknown), length 548, xid 0xa7117f0d, secs 4, Flags [Broadcast] (0x8000)
      Client-Ethernet-Address 40:ea:a7:33:7f:0e (oui Unknown)
      Client-Ethernet-Address 40:ea:a7:33:7f:0e (oui Unknown)
    0.0.0.0.bootpc > 255.255.255.255.bootps: [udp sum ok] BOOTP/DHCP, Request from 40:ea:a7:33:7f:0e (oui Unknown), length 548, xid 0xa7117f0d, secs 4, Flags [Broadcast] (0x8000)
      Client-Ethernet-Address 40:ea:a7:33:7f:0e (oui Unknown)
      Client-Ethernet-Address 40:ea:a7:33:7f:0e (oui Unknown)
      source link-address option (1), length 8 (1): 40:ea:a7:33:7f:0e
      source link-address option (1), length 8 (1): 40:ea:a7:33:7f:0e

Sunday, November 22, 2015

find IP and MAC addresses allocated by neutron dnsmasq-dhcp server

1)
# ip netns
qdhcp-f6e86850-32e9-4366-a03b-2b2a9ac481f0

2)
# ip netns exec qdhcp-f6e86850-32e9-4366-a03b-2b2a9ac481f0 ifconfig

tapbe2e899a-38 Link encap:Ethernet  HWaddr fb:14:3e:41:56:c1

3)
# ip netns exec qdhcp-f6e86850-32e9-4366-a03b-2b2a9ac481f0 arp-scan --interface tapbe2e899a-38 10.140.15.255/24




ironic: how to delete a node faster

1)
First set provision-state to deleted

$nova delete [vm-id]
$ironic node-set-provision-state [node-id] deleted

Example:
$ironic node-set-provision-state 2d32d1dd-5363-47d4-a5d6-dc3a1fe7f496 deleted

2)
Then delete the node
$ironic node-delete  [node-id]

* This helps us delete the node faster.


ironic find all compute and storage nodes

# for x in $(ironic node-list | awk '{ print $2 }'); do ironic node-show $x | grep -E 'cpu_arch|ipmi_address';echo ""; done

Result:
-----------

| properties             | {u'memory_mb': 131072, u'cpu_arch': u'x86_64.g1.compute', u'local_gb': |
|                        | u'provisioner', u'ipmi_address': u'100.214.122.81', u'ipmi_password':   |

| properties             | {u'memory_mb': 131072, u'cpu_arch': u'x86_64.g1.compute', u'local_gb': |
|                        | u'provisioner', u'ipmi_address': u'100.214.122.82', u'ipmi_password':   |
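Note that `awk '{ print $2 }'` on the node-list table also emits the header word "UUID" and empty fields from the border rows; a sketch that keeps only UUID-shaped values before looping:

```shell
# Sketch: filter `ironic node-list` output down to node UUIDs only, dropping
# the table border and header rows that awk's $2 would otherwise pass through.
list_node_uuids() {
    awk '{ print $2 }' \
      | grep -E '^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$'
}
# Usage: for x in $(ironic node-list | list_node_uuids); do ironic node-show $x; done
```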



ironic find ipmi address of all ironic nodes

1)
$for x in $(ironic node-list | awk '{ print $2 }'); do ironic node-show $x | grep ipmi_address; done

OR

$for x in $(ironic node-list | awk '{ print $2 }');
do
ironic node-show $x | grep ipmi_address;
done


undercloud ironic setup: how to register a node to ironic

1)
Find the ILO IP of the machine you want to configure
#nmap -sP 100.214.122.64/27

* 100.214.122.64/27 --> ILO network

2)
Goto ILO Web Interface of the machine
https://100.214.122.93/

3)
How to do ILO Reset to get new ILO IP from DHCP Server (isc-dhcp-server) running on UC ?
Ans:
Goto ILO Web Interface of the machine
https://100.214.122.93/
Then Select "Network" --> iLO Dedicated Network Port --> IPv4 --> Select "Enable DHCPv4" ---> Then "Submit" and "Reset".

4)
How to Check Storage ?
Ans:
Goto ILO Web Interface of the machine
https://100.214.122.93/
Then Select "Information" --> System Information --> Storage

5)
How to Enable "Network Boot" for "NIC-4" or "em4" ?

a)
First Open Remote Console::
Goto ILO Web Interface of the machine
https://100.214.122.93/
Then Select "Remote Console" --> "Remote Console" --> "Java Integrated Remote Console (Java IRC)" --> launch

b)
Then Reboot::
Select "Power Switch" --> "Cold Boot"

c)
Then Goto BIOS Setup and Enable "Network Boot" for "NIC-4" ?

F9 --> Enter --> Wait for ROM Utility --> Select "System Options" --> "Embedded NICs" --> "NIC 4 Boots Options" --> Press "Enter" --> Select "Network Boot"

OR

F11 --> Enter --> F9 --->  Wait for ROM Utility --> Select "System Options" --> "Embedded NICs" --> "NIC 4 Boots Options" --> Press "Enter" --> Select "Network Boot"

d)
Then Goto BIOS Setup and Disable "Network Boot" for "NIC 1", "NIC 2" and "NIC 3".

6)
RAID Array Configuration

a)
Open the RAID Array Configuration interface
Reboot --> Then press "F9" --> Then press "F5"

b)
Create "RAID 1" for Compute Node.
Select "RAID Array Configuration (Slot 1)" --> "Configure" --> "Create Array" --> Select All HDDs --> "Create Array" --> "RAID 1" --> "Create Logical Drive" --> Finish

b1)
Set Bootable Logical Drive/Volume.
"Set Bootable Logical Drive/Volume" --> "Local - Logical Drive 1" --> Select "Primary Bootable Logical Drive/Volume" --> OK --> Finish

c)
Create "RAID 0" for Storage Node.
Select "RAID Array Configuration (Slot 1)" --> "Configure" --> "Create Array" --> Select All HDDs --> "Create Array" --> "RAID 0" --> "Create Logical Drive" --> Finish

c1)
Set Bootable Logical Drive/Volume.
"Set Bootable Logical Drive/Volume" --> "Local - Logical Drive 1" --> Select "Primary Bootable Logical Drive/Volume" --> OK --> Finish

7)
Register that machine "100.214.122.93" to ironic.

a)
Goto UC and source "openrc_admin".

b)
Check nodes which are already registered to ironic.
root@uc1:~#ironic node-list

c)
Register the node "100.214.122.93" to ironic.
root@uc1:~#/usr/bin/python2.7 -m jiocloud.enroll --ilo_username myusername --ilo_password mypassword --os_username admin --os_tenant openstack --os_password mypassword --os_auth_url https://identity-uc.mycloud.com:5000/v2.0 --nic 4 --ilo_address 100.214.122.93

d)
Check newly registered node
root@uc1:~#ironic node-list

8)
Configure the ironic node.

a)
Configure for compute (based on requirement).
root@uc1:~#ironic node-update 2d32d1dd-5363-47d4-a5d6-dc3a1fe7f496xx replace properties/cpu_arch=x86_64.g1.compute

* 2d32d1dd-5363-47d4-a5d6-dc3a1fe7f496xx --> from $ironic node-list

b)
Configure for storage (based on requirement).
root@uc1:~#ironic node-update 2d32d1dd-5363-47d4-a5d6-dc3a1fe7f496xx replace properties/cpu_arch=x86_64.g1.storage

c)
Set "maintenance" to False
root@uc1:~#ironic node-update 2d32d1dd-5363-47d4-a5d6-dc3a1fe7f496xx replace maintenance=False

d)
Verify updated configurations
root@uc1:~#ironic node-show 2d32d1dd-5363-47d4-a5d6-dc3a1fe7f496xx

9)
Test node provision:
a)
Power off the node:
root@uc1:~#ironic node-set-power-state 2d32d1dd-5363-47d4-a5d6-dc3a1fe7f496xx off

b)
provision the node:

For Compute Node:
root@uc1:~#nova boot --flavor g1.compute --image Ubuntu-trusty --key-name mykey1 --nic net-id=f6e86850-32e9-4366-a03b-2b2a9ac481f0 compute-node1
* Right now, we can't provision a VM on a specific ironic node with the option "--availability-zone nova:xxxx".


For Storage Node:
root@uc1:~#nova boot --flavor g1.storage --image Ubuntu-trusty --key-name mykey1 --nic net-id=f6e86850-32e9-4366-a03b-2b2a9ac481f0 storage-node1

c)
Verify:
root@uc1:~#nova list
root@uc1:~#ironic node-list

10)
Power Off/On (Optional):
root@uc1:~#ironic node-set-power-state 2d32d1dd-5363-47d4-a5d6-dc3a1fe7f496xx off
root@uc1:~#ironic node-set-power-state 2d32d1dd-5363-47d4-a5d6-dc3a1fe7f496xx on

11)
Validate
#ironic node-validate [node-id]

12)
How to Delete nova vm and ironic node
$nova delete [vm-id]
$ironic node-set-provision-state [node-id] deleted
$ironic node-delete [node-id]
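The three steps above can be wrapped in one helper; a sketch with a DRY_RUN switch (the switch is an addition here, not part of the original workflow) so it can be exercised without a live cloud:

```shell
# Sketch: delete the nova VM, move the ironic node to "deleted", then remove
# the node. With DRY_RUN=1 the commands are only printed, not executed.
delete_node() {
    vm_id="$1"; node_id="$2"
    run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi; }
    run nova delete "$vm_id"
    run ironic node-set-provision-state "$node_id" deleted
    run ironic node-delete "$node_id"
}
```

Usage: DRY_RUN=1 delete_node [vm-id] [node-id]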


Register quanta node
#################


1)
Create chassis
#ironic chassis-create

2)
Create node
#ironic help node-create

#ironic node-create -d pxe_ipmitool -c 2c860c23-20ef-4b21-bbf3-656047beb85c -i ipmi_address=100.234.122.92 -i ipmi_username=admin -i ipmi_password=admin -i ipmi_terminal_port=0 -p cpus=16 -p memory_mb=131072 -p local_gb=678 -p cpu_arch=x86_64

#ironic node-list


3)
Create port

3a)
Find MAC address

a)
Open ILO web interface.
Then Open Remote console
Then reboot the system via ILO web interface "Remote Control" --> Server Power control --> Reset Server --> Click on "Perform action"
Then Press "F11 or F12", then "Ctrl + S" to enter into setup menu (Bios)

b)
Enable PXE boot (Network boot)
"Advanced" --> "OnBoard Device Configuration" --> "Onboard LAN Port" --> "Enable with PXE"

3b)
#ironic help port-create

#ironic port-create -a 2C:60:0C:28:3B:92 -n 0aa9238b-39ed-45a6-a803-f20755748882


#Ironic port update, if you want to change the MAC address
#ironic port-update 18580aeb-81e0-4936-baab-d23a36e3c01a replace address=2c:60:0c:28:3b:91

4)
Ironic node Update
#ironic node-update 0aa9238b-39ed-45a6-a803-f20755748882 replace properties/cpu_arch=x86_64.g1.compute

#ironic node-update 0aa9238b-39ed-45a6-a803-f20755748882 replace maintenance=False


5)
Power-off the system
#ironic node-set-power-state 0aa9238b-39ed-45a6-a803-f20755748882 off

6)
Test nova boot
#nova boot --flavor g1.compute --image Ubuntu-trusty --key-name mykey1 --nic net-id=f6e86850-32e9-4366-a03b-2b2a9ac481f0 compute-node1-quanta

#nova list

#ironic port-list








Friday, November 20, 2015

ipmitool find Serial Number of all servers in a network

1)
for ip in $(nmap -sP 192.204.122.64/27 | grep "Nmap scan report for" | awk '{print $5}')
do
echo "IP: "$ip
echo "SNO: "$(ipmitool -I lanplus -H  $ip -U username -P password fru | grep "Product Serial")
echo "--------"
done

ipmitool example

1)
Print system info

fru - Field-Replaceable Unit; prints built-in FRU data and scans the SDR for FRU locators
$ipmitool -I lanplus -H 192.204.122.80 -U username -P mypassword fru

2)
Find Product Serial

$ipmitool -I lanplus -H 192.204.122.80 -U username -P mypassword fru | grep "Product Serial"

3)
mc - Management Controller status and global enables

$ipmitool -I lanplus -H  192.204.122.80 -U username -P mypassword mc

4)
warm or cold system Reset/Restart

$ipmitool -I lanplus -H  192.204.122.80 -U username -P mypassword mc reset warm
$ipmitool -I lanplus -H 192.204.122.80 -U username -P mypassword mc reset cold

5)
List all users

$ipmitool -I lanplus -H  15.200.122.80 -U username -P mypassword user list

6)
Launch interactive IPMI shell

$ipmitool -I lanplus -H  15.200.122.80 -U username -P mypassword shell

7)
Commands:
    raw           Send a RAW IPMI request and print response
    i2c           Send an I2C Master Write-Read command and print response
    spd           Print SPD info from remote I2C device
    lan           Configure LAN Channels
    chassis       Get chassis status and set power state
    power         Shortcut to chassis power commands
    event         Send pre-defined events to MC
    mc            Management Controller status and global enables
    sdr           Print Sensor Data Repository entries and readings
    sensor        Print detailed sensor information
    fru           Print built-in FRU and scan SDR for FRU locators
    gendev        Read/Write Device associated with Generic Device locators sdr
    sel           Print System Event Log (SEL)
    pef           Configure Platform Event Filtering (PEF)
    sol           Configure and connect IPMIv2.0 Serial-over-LAN
    tsol          Configure and connect with Tyan IPMIv1.5 Serial-over-LAN
    isol          Configure IPMIv1.5 Serial-over-LAN
    user          Configure Management Controller users
    channel       Configure Management Controller channels
    session       Print session information
    dcmi          Data Center Management Interface
    sunoem        OEM Commands for Sun servers
    kontronoem    OEM Commands for Kontron devices
    picmg         Run a PICMG/ATCA extended cmd
    fwum          Update IPMC using Kontron OEM Firmware Update Manager
    firewall      Configure Firmware Firewall
    delloem       OEM Commands for Dell systems
    shell         Launch interactive IPMI shell
    exec          Run list of commands from file
    set           Set runtime variable for shell and exec
    hpm           Update HPM components using PICMG HPM.1 file
    ekanalyzer    run FRU-Ekeying analyzer using FRU files
    ime           Update Intel Manageability Engine Firmware


Wednesday, November 18, 2015

nmap and arp-scan

1)
nmap
$nmap -sP 15.200.122.64/27

2)
arp-scan
$arp-scan --interface em2 15.200.122.64/27



Friday, November 13, 2015

postgresql: how to find and kill a hanging query

1)
Login as superuser
$ sudo -u postgres psql

2)
Find the "procpid" of your problematic query:
postgres=# select * from pg_stat_activity;

3)
Kill the problematic query (substitute the procpid value from the previous step):
postgres=# select pg_terminate_backend([procpid]);
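Note: the "procpid" column was renamed to "pid" in PostgreSQL 9.2, which also added the "state" and "query" columns. On a newer server the lookup and kill can be sketched like this (the 5-minute threshold is just an example):

```shell
# Sketch for PostgreSQL 9.2+: list queries active longer than 5 minutes,
# then build the kill statement for a given pid. Pipe the SQL to
# `sudo -u postgres psql` on a real server.
LONG_RUNNING_SQL="select pid, now() - query_start as runtime, query
from pg_stat_activity
where state = 'active' and now() - query_start > interval '5 minutes';"

build_kill_sql() {
    echo "select pg_terminate_backend($1);"
}
```

Usage: build_kill_sql 12345 | sudo -u postgres psql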

Tuesday, November 10, 2015

How to enable the tray icons bar (taskbar) on Ubuntu 16.04 / 15.04 / 14.04


https://extensions.gnome.org/extension/615/appindicator-support/


https://extensions.gnome.org/extension/495/topicons/


Sunday, November 8, 2015

Probability Density (PDF) and Cumulative Distribution (CDF) Functions


https://www.youtube.com/watch?v=Q0auB05R3Vs

https://www.youtube.com/watch?v=1xQ4r2gcW3c

https://www.youtube.com/watch?v=S4r3J-nXlOA


Thursday, November 5, 2015

quanta node RAID and BIOS Access

1)
How to reboot a quanta node.
Open ILO Web UI --> Remote Control --> Server Power Control -->  Reset Server --> Perform Action

2)
Press "F2 or F11 or F12", then "Ctrl+I" to Create/Delete RAID Volume

3)
Press "F11 or F12", then "Ctrl + S" to enter into setup menu (Bios)
Sometimes we only need to press F11.

a)
Enable PXE boot (Network boot)
"Advanced" --> "OnBoard Device Configuration" --> "Onboard LAN Port" --> "Enable with PXE"

4)
Press "F2" to enter into setup menu (Advanced Bios)

5)
Press "F11 or F12", then "Ctrl + C" to enter into LSI Corporation Utility

6)
How to Enable DHCP for ILO interface
Go to --> http://100.224.222.92 (u/p: admin) --> Configuration --> Network --> IPv4 Configuration --> "Use DHCP".

7)
How to install OS from ISO file

Remote console --> media --> virtual media wizard --> ISO Image (browse iso file) --> Click on "connect" button.
Then reboot quanta node.


Monday, November 2, 2015

contrail contrail_vrouter_api


https://github.com/Juniper/contrail-controller/tree/master/src/vnsw/contrail-vrouter-api


https://github.com/Juniper/contrail-controller/blob/master/src/vnsw/agent/openstack/instance_service.thrift

1)
In CP node
/usr/lib/python2.7/dist-packages/contrail_vrouter_api

2)
$ ls /usr/lib/python2.7/dist-packages/contrail_vrouter_api
gen_py  __init__.py  __init__.pyc  tests  vrouter_api.py  vrouter_api.pyc

3)
$ ls /usr/lib/python2.7/dist-packages/contrail_vrouter_api/gen_py/
__init__.py  __init__.pyc  instance_service

4)
$ ls /usr/lib/python2.7/dist-packages/contrail_vrouter_api/gen_py/instance_service/
constants.py  constants.pyc  __init__.py  __init__.pyc  InstanceService.py  InstanceService.pyc  ttypes.py  ttypes.pyc