Wednesday, December 31, 2014

Happy New Year 2015

Wishing everyone a Happy New Year 2015.

Tuesday, September 30, 2014

Level Up Campaign- Packt Publishing

Take your skills to the next level 

 For the next 7 days ALL eBooks and Videos are just $10 or less -- the more you choose to learn, the more you save:
  • Any 1 or 2 eBooks/Videos -- $10 each

  • Any 3-5 eBooks/Videos -- $8 each

  • Any 6 or more eBooks/Videos -- $6 each

The discounts above are automatically applied in your cart when you add the correct number of titles. Offer ends October 2nd.
Explore more here: Level Up!

Monday, June 30, 2014

Happy News for Readers - Packt Publishing offers $10 eBooks to celebrate 10 glorious years

I would like to pass on the message below to readers.
Packt Publishing is celebrating 10 glorious years of publishing books. To celebrate this huge milestone, from June 26th Packt is offering all of its eBooks and Videos at just $10 each for 10 days. This promotion covers every title, and customers can stock up on as many copies as they like until July 5th.

Explore this offer here http://bit.ly/1m1PPqj

Friday, April 18, 2014

Big Data Oracle NoSQL in No Time - It is time to Load Data for a Simple Use Case

Index
Big Data Oracle NoSQL in No Time - Getting Started Part 1
Big Data Oracle NoSQL in No Time - Startup & Shutdown Part 2

Big Data Oracle NoSQL in No Time - Setting up 1x1 Topology Part 3
Big Data Oracle NoSQL in No Time - Expanding 1x1 to 3x1 Topology Part 4
Big Data Oracle NoSQL in No Time - From 3x1 to 3x3 Topology Part 5
Big Data Oracle NoSQL in No Time - Smoke Testing Part 6
Big Data Oracle NoSQL in No Time - Increasing Throughput Read/Write Part 7
Big Data Oracle NoSQL in No Time - It is time to Upgrade
Big Data Oracle NoSQL in No Time - It is time to Load Data for a Simple Use Case

There are plenty of NoSQL use case references out there, but I wanted to keep this one simple. I am not a developer, but my Unix scripting skills come to the rescue.

So here is what I am planning to build:

  • create a schema for storing server CPU details from the mpstat command
  • store a record every minute
  • on 4 nodes
  • then build some dashboards
AVRO Schema Design

Here I am creating an Avro schema that holds the date and time along with the values from mpstat.

cpudata.avsc
{
  "type": "record",
  "name": "cpudata",
  "namespace": "avro",
  "fields": [
    {"name": "yyyy", "type": "int", "default": 0},
    {"name": "mm", "type": "int", "default": 0},
    {"name": "dd", "type": "int", "default": 0},
    {"name": "hh", "type": "int", "default": 0},
    {"name": "mi", "type": "int", "default": 0},
    {"name": "user", "type": "float", "default": 0},
    {"name": "nice", "type": "float", "default": 0},
    {"name": "sys", "type": "float", "default": 0},
    {"name": "iowait", "type": "float", "default": 0},
    {"name": "irq", "type": "float", "default": 0},
    {"name": "soft", "type": "float", "default": 0},
    {"name": "steal", "type": "float", "default": 0},
    {"name": "idle", "type": "float", "default": 0},
    {"name": "intr", "type": "float", "default": 0}
  ]
}
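Before registering the schema with ddl add-schema, it can save a round trip to confirm the file is at least valid JSON. A minimal sketch, assuming python3 is available; the heredoc holds a trimmed copy of the schema above:

```shell
# Write a trimmed copy of cpudata.avsc (only 3 of the 14 fields shown)
# and check that it parses as JSON before handing it to the admin CLI.
cat > /tmp/cpudata.avsc <<'EOF'
{ "type": "record", "name": "cpudata", "namespace": "avro",
  "fields": [
    {"name": "yyyy", "type": "int",   "default": 0},
    {"name": "user", "type": "float", "default": 0},
    {"name": "idle", "type": "float", "default": 0}
  ] }
EOF
# json.tool exits non-zero on malformed JSON
python3 -m json.tool /tmp/cpudata.avsc > /dev/null && echo "cpudata.avsc: valid JSON"
```

This catches the easy mistake of a missing brace or trailing comma before the store rejects the schema.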

Now I am adding the schema to the store

$ java -jar $KVHOME/lib/kvstore.jar runadmin -host server1 -port 5000
kv-> ddl add-schema -file cpudata.avsc
Added schema: avro.cpudata.1
kv-> show schema
avro.cpudata
  ID: 1  Modified: 2014-04-18 00:29:58 UTC, From: server1
kv->

To load the data, I am creating a shell script that writes the put kv command into a temporary file and then immediately loads that file into the store.
This is automated via a crontab entry that runs every minute, so the script captures the server CPU metrics once a minute.

$ cat cpuload.sh
export KVHOME=$KVBASE/server2/oraclesoftware/kv-3.0.5
echo `hostname` `date +"%d-%m-%Y-%H-%M-%S"` `date +"%-d"` `date +"%-m"` `date +"%Y"` `date +"%-H"` `date +"%-M"` `mpstat|tail -1`|awk '{print "put kv -key /cpudata/"$1"/"$2" -value \"{\\\"yyyy\\\":"$5",\\\"mm\\\":"$4",\\\"dd\\\":"$3",\\\"hh\\\":"$6",\\\"mi\\\":"$7",\\\"user\\\":"$10",\\\"nice\\\":"$11",\\\"sys\\\":"$12",\\\"iowait\\\":"$13",\\\"irq\\\":"$14",\\\"soft\\\":"$15",\\\"steal\\\":"$16",\\\"idle\\\":"$17",\\\"intr\\\":"$18" }\" -json avro.cpudata"}' > /tmp/1.load
java -jar $KVHOME/lib/kvcli.jar -host server1 -port 5000 -store mystore load -file /tmp/1.load
$
$ crontab -l
* * * * * /oraclenosql/work/cpuload.sh
$
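The awk pipeline in cpuload.sh is dense, so here is a cut-down sketch of the same transformation run against a canned mpstat-style line instead of live output. The hostname, timestamp, and CPU values below are made up, and only three of the fourteen value fields are emitted:

```shell
# Fields: 1=host 2=timestamp 3=dd 4=mm 5=yyyy 6=hh 7=mi, then the
# "mpstat | tail -1" columns; $10 is %user and $17 is %idle, as in cpuload.sh.
line="server1 18-04-2014-03-35-02 18 4 2014 3 35 Average: all 0.88 1.35 0.39 1.04 0.00 0.01 0.04 96.30 713.04"
cmd=$(echo "$line" | awk '{printf "put kv -key /cpudata/%s/%s -value {\"yyyy\":%s,\"user\":%s,\"idle\":%s}", $1, $2, $5, $10, $17}')
echo "$cmd"
```

The full script simply extends this to all fourteen fields and appends -json avro.cpudata so the value is stored against the registered schema.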

Now that the job is scheduled, let me verify that records are getting loaded.

kv-> get kv -key /cpudata -all -keyonly
/cpudata/server1/18-04-2014-03-35-02
/cpudata/server1/18-04-2014-03-36-02
2 Keys returned.
kv->

Since the job has just started, the store holds two records so far.

kv-> aggregate -count -key /cpudata
count: 2
kv->

A detailed listing of the two records

kv-> get kv -key /cpudata/server1 -all
/cpudata/server1/18-04-2014-03-37-02
{
  "yyyy" : 2014,
  "mm" : 4,
  "dd" : 18,
  "hh" : 3,
  "mi" : 37,
  "user" : 0.8799999952316284,
  "nice" : 1.350000023841858,
  "sys" : 0.38999998569488525,
  "iowait" : 1.0399999618530273,
  "irq" : 0.0,
  "soft" : 0.009999999776482582,
  "steal" : 0.03999999910593033,
  "idle" : 96.30000305175781,
  "intr" : 713.0399780273438
}
/cpudata/server1/18-04-2014-03-35-02
{
  "yyyy" : 2014,
  "mm" : 4,
  "dd" : 18,
  "hh" : 3,
  "mi" : 35,
  "user" : 0.8799999952316284,
  "nice" : 1.350000023841858,
  "sys" : 0.38999998569488525,
  "iowait" : 1.0399999618530273,
  "irq" : 0.0,
  "soft" : 0.009999999776482582,
  "steal" : 0.03999999910593033,
  "idle" : 96.30000305175781,
  "intr" : 713.0399780273438
}

Now I am going to sleep; the fun starts the next day. With 24 hours completed, the store holds CPU metrics for the whole day. Let me try some aggregate commands.

Average CPU usage 
kv-> aggregate -key /cpudata/server1 -avg user
avg(user): 0.8799999952316284

kv-> aggregate -key /cpudata/server1 -avg user,nice,sys,iowait,irq,soft,steal,idle,intr
avg(user): 0.8799999952316284
avg(nice): 1.350000023841858
avg(sys): 0.38999998569488525
avg(iowait): 1.0399999618530273
avg(irq): 0.0
avg(soft): 0.009999999776482582
avg(steal): 0.03999999910593033
avg(idle): 96.30000305175781
avg(intr): 713.0599822998047
kv->
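Under the hood, aggregate -avg is just a mean over the matching records' field values. As a sanity-check sketch, the same arithmetic with awk over two sample idle readings (values are illustrative):

```shell
# Average two sample idle values the way "aggregate -avg idle" would
avg=$(printf '96.30\n96.10\n' | awk '{s += $1; n++} END {printf "%.2f", s/n}')
echo "avg(idle): $avg"
```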

Let me add a key range and look at the usage for a specific window.

kv-> aggregate -key /cpudata/server1 -avg user,nice,sys,iowait,irq,soft,steal,idle,intr -start 18-04-2014-04 -end 18-04-2014-05

avg(user): 0.8799999952316284
avg(nice): 1.350000023841858
avg(sys): 0.38999998569488525
avg(iowait): 1.0399999618530273
avg(irq): 0.0
avg(soft): 0.009999999776482582
avg(steal): 0.03999999910593033
avg(idle): 96.30000305175781
avg(intr): 713.0399780273438
kv-> aggregate -key /cpudata/server1 -avg user,nice,sys,iowait,irq,soft,steal,idle,intr -start 18-04-2014-03-35-02 -end 18-04-2014-03-40-02

avg(user): 0.8799999952316284
avg(nice): 1.350000023841858
avg(sys): 0.38999998569488525
avg(iowait): 1.0399999618530273
avg(irq): 0.0
avg(soft): 0.009999999776482582
avg(steal): 0.03999999910593033
avg(idle): 96.30000305175781
avg(intr): 713.0849914550781
kv->


Interesting, isn't it?

Time for some dashboards


Hourly CPU Idle Metric 

$ for i in 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
> do
> echo "connect store -host server1 -port 5000 -name mystore" > /tmp/1.lst
> echo "aggregate -key /cpudata/server1 -avg user,nice,sys,iowait,irq,soft,steal,idle,intr -start 18-04-2014-"$i" -end 18-04-2014-"$i >> /tmp/1.lst
> echo "18-04-2014-"$i" - "`java -jar $KVHOME/lib/kvstore.jar runadmin -host server1 -port 5000 load -file /tmp/1.lst|grep -i idle|awk '{print $2 }'`
> done
18-04-2014-01 - 96.27333068847656
18-04-2014-02 - 96.27999877929688
18-04-2014-03 - 96.30000305175781
18-04-2014-04 - 96.30000305175781
18-04-2014-05 - 96.30000305175781
18-04-2014-06 - 96.30000305175781
18-04-2014-07 - 96.28433303833008
18-04-2014-08 - 96.2699966430664
18-04-2014-09 - 96.2699966430664
18-04-2014-10 - 96.27333068847656
18-04-2014-11 - 96.27999877929688
18-04-2014-12 - 96.2870002746582
18-04-2014-13 - 96.29016761779785
18-04-2014-14 - 96.29683570861816
18-04-2014-15 - 96.302001953125
18-04-2014-16 - 96.30999755859375
18-04-2014-17 - 96.31849937438965
18-04-2014-18 - 96.32483406066895
18-04-2014-19 - 96.33000183105469
18-04-2014-20 - 96.3331667582194
18-04-2014-21 - 96.28135165652714
18-04-2014-22 - 96.27333068847656
18-04-2014-23 - 96.27999877929688
18-04-2014-24 - 96.27333068847656
$
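The hard-coded hour list in these dashboard loops can be generated instead of typed out; a small sketch using seq -w, which zero-pads to the width of the largest value:

```shell
# Build the zero-padded hour labels 01..24 used by the dashboard loops
hours=$(seq -w 1 24 | tr '\n' ' ')
echo "$hours"
```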



Hourly CPU User Metric 

$ for i in 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
> do
> echo "connect store -host server1 -port 5000 -name mystore" > /tmp/1.lst
> echo "aggregate -key /cpudata/server1 -avg user,nice,sys,iowait,irq,soft,steal,idle,intr -start 18-04-2014-"$i" -end 18-04-2014-"$i >> /tmp/1.lst
> echo "18-04-2014-"$i" - "`java -jar $KVHOME/lib/kvstore.jar runadmin -host server1 -port 5000 load -file /tmp/1.lst|grep -i user|awk '{print $2 }'`
> done
18-04-2014-01 - 0.8899999856948853
18-04-2014-02 - 0.8899999856948853
18-04-2014-03 - 0.8799999952316284
18-04-2014-04 - 0.8799999952316284
18-04-2014-05 - 0.8799999952316284
18-04-2014-06 - 0.8799999952316284
18-04-2014-07 - 0.8819999933242798
18-04-2014-08 - 0.8906666517257691
18-04-2014-09 - 0.8899999856948853
18-04-2014-10 - 0.8899999856948853
18-04-2014-11 - 0.8899999856948853
18-04-2014-12 - 0.8899999856948853
18-04-2014-13 - 0.8899999856948853
18-04-2014-14 - 0.8899999856948853
18-04-2014-15 - 0.8899999856948853
18-04-2014-16 - 0.8899999856948853
18-04-2014-17 - 0.8899999856948853
18-04-2014-18 - 0.8899999856948853
18-04-2014-19 - 0.8899999856948853
18-04-2014-20 - 0.8899999856948853
18-04-2014-21 - 0.8921276432402591
18-04-2014-22 - 0.8799999952316284
18-04-2014-23 - 0.8799999952316284
18-04-2014-24 - 0.8899999856948853
$





Hourly CPU IOWAIT Metric

$ for i in 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
> do
> echo "connect store -host server1 -port 5000 -name mystore" > /tmp/1.lst
> echo "aggregate -key /cpudata/server1 -avg user,nice,sys,iowait,irq,soft,steal,idle,intr -start 18-04-2014-"$i" -end 18-04-2014-"$i >> /tmp/1.lst
> echo "18-04-2014-"$i" - "`java -jar $KVHOME/lib/kvstore.jar runadmin -host server1 -port 5000 load -file /tmp/1.lst|grep -i iowait|awk '{print $2 }'`
> done
18-04-2014-01 - 1.0907692328477516
18-04-2014-02 - 1.0499999523162842
18-04-2014-03 - 1.0399999618530273
18-04-2014-04 - 1.0399999618530273
18-04-2014-05 - 1.0373332977294922
18-04-2014-06 - 1.0299999713897705
18-04-2014-07 - 1.0403332948684691
18-04-2014-08 - 1.0499999523162842
18-04-2014-09 - 1.0499999523162842
18-04-2014-10 - 1.0499999523162842
18-04-2014-11 - 1.0499999523162842
18-04-2014-12 - 1.0499999523162842
18-04-2014-13 - 1.0481666207313538
18-04-2014-14 - 1.0499999523162842
18-04-2014-15 - 1.0449999570846558
18-04-2014-16 - 1.0399999618530273
18-04-2014-17 - 1.0399999618530273
18-04-2014-18 - 1.0399999618530273
18-04-2014-19 - 1.0399999618530273
18-04-2014-20 - 1.0398332953453064
18-04-2014-21 - 1.0907692328477516
18-04-2014-22 - 1.0499999523162842
18-04-2014-23 - 1.0907692328477516
18-04-2014-24 - 1.0499999523162842
$



So this NoSQL use case is very simple. I have scheduled the job on a couple of other servers as well, so the store can be used to analyze CPU metrics for all my hosted servers. The Avro schema can be extended to carry much more information.

Monday, April 14, 2014

Big Data Oracle NoSQL in No Time - It is time to Upgrade

Oracle NoSQL upgrade from 11gR2 to 12cR1 (2.0 to 3.0)


The upgrade is simple; NoSQL is brilliant in its simplicity.

These are the steps:

  • verify prerequisite - check that the storage nodes meet the prerequisites for the upgrade
  • show upgrade-order - get the ordered list of storage nodes to upgrade
  • replace the software - unzip the new software in place
  • verify upgrade - confirm the storage nodes are running the version we downloaded
In our scenario we have a 4x4 deployment topology with one admin node, and we will upgrade from 11gR2 to 12cR1.
First, let us upgrade the admin node.
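Each node upgrade below repeats the same stop/copy/restart pattern, so it can be captured once as a function. A dry-run sketch that only prints the commands for a given node name (the directory layout follows this lab's $KVBASE convention):

```shell
# Print the upgrade commands for one storage node instead of running them
upgrade_sn() {
  srv="$1"
  echo "export KVHOME=\$KVBASE/$srv/oraclesoftware/kv-2.0.39"
  echo "java -jar \$KVHOME/lib/kvstore.jar stop -root \$KVBASE/$srv/storage"
  echo "cp -Rf \$KVBASE/stage/kv-3.0.5 \$KVBASE/$srv/oraclesoftware/"
  echo "export KVHOME=\$KVBASE/$srv/oraclesoftware/kv-3.0.5"
  echo "nohup java -jar \$KVHOME/lib/kvstore.jar start -root \$KVBASE/$srv/storage &"
}
upgrade_sn server3
```

Dropping the echoes turns it into the real thing; keeping them makes the order of operations easy to review before touching a node.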


$ export KVHOME=$KVBASE/server1/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar stop -root $KVBASE/server1/storage
$ cd $KVBASE/server1/oraclesoftware
$ cp -Rf $KVBASE/stage/kv-3.0.5 .
$ export KVHOME=$KVBASE/server1/oraclesoftware/kv-3.0.5
$ nohup java -jar $KVHOME/lib/kvstore.jar start -root $KVBASE/server1/storage &
$ nohup: appending output to `nohup.out'
$ java -jar $KVHOME/lib/kvstore.jar runadmin -port 5000 -host server1
kv-> verify prerequisite
Verify: starting verification of mystore based upon topology sequence #84
30 partitions and 4 storage nodes. Version: 12.1.3.0.5 Time: 2014-04-14 08:33:50 UTC
See server1:$KVBASE/server1/storage/mystore/log/mystore_{0..N}.log for progress messages
Verify prerequisite: Storage Node [sn3] on server3:5200    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 11gR2.2.0.39 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn4] on server4:5300    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 11gR2.2.0.39 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn1] on server1:5000    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn2] on server2:5100    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 11gR2.2.0.39 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verification complete, no violations.
kv->
kv-> show upgrade-order
Calculating upgrade order, target version: 12.1.3.0.5, prerequisite: 11.2.2.0.23
sn3
sn4
sn2
kv->

In our case the upgrade order is determined to be sn3, sn4 and then sn2. We can re-check the remaining upgrade order at each stage.

Now let us upgrade SN3

$ export KVHOME=$KVBASE/server3/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar stop -root $KVBASE/server3/storage
$
$ cd $KVBASE/server3/oraclesoftware
$ cp -Rf $KVBASE/stage/kv-3.0.5 .
$ export KVHOME=$KVBASE/server3/oraclesoftware/kv-3.0.5
$ nohup java -jar $KVHOME/lib/kvstore.jar start -root $KVBASE/server3/storage &
$

kv->  verify prerequisite
Verify: starting verification of mystore based upon topology sequence #84
30 partitions and 4 storage nodes. Version: 12.1.3.0.5 Time: 2014-04-14 08:40:31 UTC
See server1:$KVBASE/server1/storage/mystore/log/mystore_{0..N}.log for progress messages
Verify prerequisite: Storage Node [sn3] on server3:5200    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn4] on server4:5300    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 11gR2.2.0.39 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn1] on server1:5000    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn2] on server2:5100    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 11gR2.2.0.39 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verification complete, no violations.
kv->
kv-> show upgrade-order
Calculating upgrade order, target version: 12.1.3.0.5, prerequisite: 11.2.2.0.23
sn4
sn2

kv->


Now let us upgrade SN4

$  export KVHOME=$KVBASE/server4/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar stop -root $KVBASE/server4/storage
$
$ cd $KVBASE/server4/oraclesoftware
$ cp -Rf $KVBASE/stage/kv-3.0.5 .
$ export KVHOME=$KVBASE/server4/oraclesoftware/kv-3.0.5
$ nohup java -jar $KVHOME/lib/kvstore.jar start -root $KVBASE/server4/storage &
$

kv-> verify prerequisite
Verify: starting verification of mystore based upon topology sequence #84
30 partitions and 4 storage nodes. Version: 12.1.3.0.5 Time: 2014-04-14 08:42:30 UTC
See server1:$KVBASE/server1/storage/mystore/log/mystore_{0..N}.log for progress messages
Verify prerequisite: Storage Node [sn3] on server3:5200    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn4] on server4:5300    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn1] on server1:5000    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn2] on server2:5100    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 11gR2.2.0.39 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verification complete, no violations.
kv->

kv-> show upgrade-order
Calculating upgrade order, target version: 12.1.3.0.5, prerequisite: 11.2.2.0.23
sn2

kv->

Now let us upgrade the last pending storage node SN2

$ export KVHOME=$KVBASE/server2/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar stop -root $KVBASE/server2/storage
$
$ cd $KVBASE/server2/oraclesoftware
$ cp -Rf $KVBASE/stage/kv-3.0.5 .
$ export KVHOME=$KVBASE/server2/oraclesoftware/kv-3.0.5
$ nohup java -jar $KVHOME/lib/kvstore.jar start -root $KVBASE/server2/storage &
$

kv-> verify prerequisite
Verify: starting verification of mystore based upon topology sequence #84
30 partitions and 4 storage nodes. Version: 12.1.3.0.5 Time: 2014-04-14 08:44:12 UTC
See server1:$KVBASE/server1/storage/mystore/log/mystore_{0..N}.log for progress messages
Verify prerequisite: Storage Node [sn3] on server3:5200    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn4] on server4:5300    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn1] on server1:5000    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify prerequisite: Storage Node [sn2] on server2:5100    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verification complete, no violations.
kv->

kv-> show upgrade-order
Calculating upgrade order, target version: 12.1.3.0.5, prerequisite: 11.2.2.0.23
There are no nodes that need to be upgraded
kv->

Let us quickly verify the upgrade process

kv-> verify upgrade
Verify: starting verification of mystore based upon topology sequence #84
30 partitions and 4 storage nodes. Version: 12.1.3.0.5 Time: 2014-04-14 08:44:27 UTC
See server1:$KVBASE/server1/storage/mystore/log/mystore_{0..N}.log for progress messages
Verify upgrade: Storage Node [sn3] on server3:5200    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify upgrade: Storage Node [sn4] on server4:5300    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify upgrade: Storage Node [sn1] on server1:5000    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407
Verify upgrade: Storage Node [sn2] on server2:5100    Zone: [name=datacenter1 id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.0.5 2014-03-27 10:00:25 UTC  Build id: cc4ac0e66407

Verification complete, no violations.
kv->


As an Oracle DBA I know how complex an upgrade can be, but upgrading NoSQL is refreshingly different.


Friday, April 11, 2014

Big Data Oracle NoSQL in No Time - Increasing Throughput Read/Write Part 7


Let us expand our environment.
If your NoSQL store has a write bottleneck, adding a storage node will help.
If your NoSQL store has a read bottleneck, increasing the replication factor will help.

Steps to make 3x4 (to increase the write throughput)

kv-> plan deploy-sn -dc dc1 -port 5300 -wait -host server4
kv-> plan change-parameters -service sn4 -wait -params capacity=3
kv-> topology clone -current -name 3x4
kv-> topology change-repfactor -name 3x4 -pool AllStorageNodes -rf 4 -dc dc1
kv-> topology preview -name 3x4
kv-> plan deploy-topology -name 3x4 -wait



Steps to make 4x4 (to increase the read throughput)

kv-> plan change-parameters -service sn1 -wait -params capacity=4
kv-> plan change-parameters -service sn2 -wait -params capacity=4
kv-> plan change-parameters -service sn3 -wait -params capacity=4
kv-> plan change-parameters -service sn4 -wait -params capacity=4
kv-> topology clone -current -name 4x4
kv-> topology redistribute -name 4x4 -pool AllStorageNodes
kv-> topology preview -name 4x4
kv-> plan deploy-topology -name 4x4 -wait
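The arithmetic behind both expansions is the same: total replication-node slots equal storage nodes times capacity, and the shard count is slots divided by the replication factor. A quick sketch for the 4x4 case:

```shell
# 4 storage nodes at capacity 4 give 16 RN slots; at repfactor 4 the
# redistribute step arranges them into 4 shards, hence "4x4"
sns=4; capacity=4; rf=4
slots=$(( sns * capacity ))
shards=$(( slots / rf ))
echo "${shards}x${rf} topology ($slots replication nodes)"
```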



Thursday, April 10, 2014

Big Data Oracle NoSQL in No Time - Smoke Testing Part 6


Oracle NoSQL can be smoke tested in different ways; the most common are the ping command and a simple Java program.
Customers can design their own smoke-testing programs as needed.

Let us compile the example program that ships with the documentation:
$ export KVBASE=/oraclenosql/lab
$ export KVHOME=$KVBASE/server1/oraclesoftware/kv-2.0.39
$ cd $KVHOME
$ javac -cp lib/kvclient.jar:examples examples/hello/*.java
$ java -cp $KVHOME/lib/kvclient.jar:$KVHOME/examples hello.HelloBigDataWorld -port 5000 -host server1 -store mystore
Hello Big Data World!
$

With all three storage nodes up and running, below is the output of the ping command and the Java program:

$ export KVHOME=$KVBASE/server1/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar ping -port 5000 -host server1
Pinging components of store mystore based upon topology sequence #67
mystore comprises 30 partitions and 3 Storage Nodes
Storage Node [sn1] on server1:5000    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
        Rep Node [rg1-rn1]      Status: RUNNING,MASTER at sequence number: 255 haPort: 5011
        Rep Node [rg3-rn2]      Status: RUNNING,REPLICA at sequence number: 135 haPort: 5013
        Rep Node [rg2-rn2]      Status: RUNNING,REPLICA at sequence number: 135 haPort: 5012
Storage Node [sn2] on server2:5100    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
        Rep Node [rg3-rn3]      Status: RUNNING,REPLICA at sequence number: 135 haPort: 5112
        Rep Node [rg1-rn2]      Status: RUNNING,REPLICA at sequence number: 255 haPort: 5111
        Rep Node [rg2-rn1]      Status: RUNNING,MASTER at sequence number: 135 haPort: 5110
Storage Node [sn3] on server3:5200    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
        Rep Node [rg3-rn1]      Status: RUNNING,MASTER at sequence number: 135 haPort: 5210
        Rep Node [rg2-rn3]      Status: RUNNING,REPLICA at sequence number: 135 haPort: 5212
        Rep Node [rg1-rn3]      Status: RUNNING,REPLICA at sequence number: 255 haPort: 5211
$
$ java -cp $KVHOME/lib/kvclient.jar:$KVHOME/examples hello.HelloBigDataWorld -port 5000 -host server1 -store mystore
Hello Big Data World!
$


Let us take down the third storage node. The ping output confirms that the third storage node is unreachable, while the Java program still works fine with the remaining storage nodes.

$ export KVHOME=$KVBASE//server3/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar stop -root $KVBASE//server3/storage
$
$ export KVHOME=$KVBASE//server1/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar ping -port 5000 -host server1
Pinging components of store mystore based upon topology sequence #67
mystore comprises 30 partitions and 3 Storage Nodes
Storage Node [sn1] on server1:5000    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
        Rep Node [rg1-rn1]      Status: RUNNING,MASTER at sequence number: 255 haPort: 5011
        Rep Node [rg3-rn2]      Status: RUNNING,REPLICA at sequence number: 137 haPort: 5013
        Rep Node [rg2-rn2]      Status: RUNNING,REPLICA at sequence number: 135 haPort: 5012
Storage Node [sn2] on server2:5100    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
        Rep Node [rg3-rn3]      Status: RUNNING,MASTER at sequence number: 137 haPort: 5112
        Rep Node [rg1-rn2]      Status: RUNNING,REPLICA at sequence number: 255 haPort: 5111
        Rep Node [rg2-rn1]      Status: RUNNING,MASTER at sequence number: 135 haPort: 5110
Storage Node [sn3] on server3:5200    Datacenter: datacenter1 [dc1] UNREACHABLE
        Rep Node [rg3-rn1]      Status: UNREACHABLE
        Rep Node [rg2-rn3]      Status: UNREACHABLE
        Rep Node [rg1-rn3]      Status: UNREACHABLE
$
$ java -cp $KVHOME/lib/kvclient.jar:$KVHOME/examples hello.HelloBigDataWorld -port 5000 -host server1 -store mystore
Hello Big Data World!
$

Let us take down the second storage node as well. That leaves us running with one storage node up and two down.
The Java program makes it clear that the NoSQL store is no longer functional for writes, because the default commit policy is simple majority, which requires two replicas.
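The failing write that follows comes down to simple-majority arithmetic: with replication factor 3, a commit needs floor(3/2)+1 acknowledgements, which a single surviving node cannot provide. A one-line sketch:

```shell
# Simple majority for replication factor 3: floor(rf/2) + 1
rf=3
majority=$(( rf / 2 + 1 ))
echo "replicas required to commit: $majority"
```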

$ export KVHOME=$KVBASE//server2/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar stop -root $KVBASE//server2/storage
$
$ export KVHOME=$KVBASE//server1/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar ping -port 5000 -host server1
Pinging components of store mystore based upon topology sequence #67
mystore comprises 30 partitions and 3 Storage Nodes
Storage Node [sn1] on server1:5000    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
        Rep Node [rg1-rn1]      Status: RUNNING,MASTER at sequence number: 257 haPort: 5011
        Rep Node [rg3-rn2]      Status: RUNNING,UNKNOWN at sequence number: 137 haPort: 5013
        Rep Node [rg2-rn2]      Status: RUNNING,UNKNOWN at sequence number: 135 haPort: 5012
Storage Node [sn2] on server2:5100    Datacenter: datacenter1 [dc1] UNREACHABLE
        Rep Node [rg3-rn3]      Status: UNREACHABLE
        Rep Node [rg1-rn2]      Status: UNREACHABLE
        Rep Node [rg2-rn1]      Status: UNREACHABLE
Storage Node [sn3] on server3:5200    Datacenter: datacenter1 [dc1] UNREACHABLE
        Rep Node [rg3-rn1]      Status: UNREACHABLE
        Rep Node [rg2-rn3]      Status: UNREACHABLE
        Rep Node [rg1-rn3]      Status: UNREACHABLE
$
$ java -cp $KVHOME/lib/kvclient.jar:$KVHOME/examples hello.HelloBigDataWorld -port 5000 -host server1 -store mystore
oracle.kv.DurabilityException: (JE 5.0.74) Commit policy: SIMPLE_MAJORITY required 2 replicas. But none were active with this master. (11.2.2.0.39)
Fault class name: com.sleepycat.je.rep.InsufficientReplicasException
Remote stack trace: com.sleepycat.je.rep.InsufficientReplicasException: (JE 5.0.74) Commit policy: SIMPLE_MAJORITY required 2 replicas. But none were active with this master.
$

By bringing up storage nodes 2 and 3, our store is operational again.

$ export KVHOME=$KVBASE//server3/oraclesoftware/kv-2.0.39
$ nohup java -jar $KVHOME/lib/kvstore.jar start -root $KVBASE//server3/storage &
$ export KVHOME=$KVBASE//server2/oraclesoftware/kv-2.0.39
$ nohup java -jar $KVHOME/lib/kvstore.jar start -root $KVBASE//server2/storage &

$ java -jar $KVHOME/lib/kvstore.jar ping -port 5000 -host server1
Pinging components of store mystore based upon topology sequence #67
mystore comprises 30 partitions and 3 Storage Nodes
Storage Node [sn1] on server1:5000    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
        Rep Node [rg1-rn1]      Status: RUNNING,REPLICA at sequence number: 265 haPort: 5011
        Rep Node [rg3-rn2]      Status: RUNNING,MASTER at sequence number: 141 haPort: 5013
        Rep Node [rg2-rn2]      Status: RUNNING,REPLICA at sequence number: 141 haPort: 5012
Storage Node [sn2] on server2:5100    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
        Rep Node [rg3-rn3]      Status: RUNNING,REPLICA at sequence number: 141 haPort: 5112
        Rep Node [rg1-rn2]      Status: RUNNING,REPLICA at sequence number: 265 haPort: 5111
        Rep Node [rg2-rn1]      Status: RUNNING,REPLICA at sequence number: 141 haPort: 5110
Storage Node [sn3] on server3:5200    Datacenter: datacenter1 [dc1]    Status: RUNNING   Ver: 11gR2.2.0.39 2013-04-23 08:30:07 UTC  Build id: b205fb13eb4e
        Rep Node [rg3-rn1]      Status: RUNNING,REPLICA at sequence number: 141 haPort: 5210
        Rep Node [rg2-rn3]      Status: RUNNING,MASTER at sequence number: 141 haPort: 5212
        Rep Node [rg1-rn3]      Status: RUNNING,MASTER at sequence number: 265 haPort: 5211
$

$ export KVHOME=$KVBASE//server1/oraclesoftware/kv-2.0.39
$ java -cp $KVHOME/lib/kvclient.jar:$KVHOME/examples hello.HelloBigDataWorld -port 5000 -host server1 -store mystore
Hello Big Data World!
$

Big Data Oracle NoSQL in No Time - From 3x1 to 3x3 Topology Part 5


With the current 3x1 setup, the NoSQL store is write efficient. To make it read efficient as well, the replication factor has to be increased, which creates additional copies of the data internally and improves read performance.

In the scenario below we are going to raise the replication factor of the existing topology from 1 to 3 to make it read friendly.
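Before walking through the steps interactively, here is a sketch of the whole expansion batched into a single reviewable script. The paths, store name, and ports are the ones assumed throughout this lab series, and the final comment relies on the admin CLI's load -file option; adjust to your environment.

```shell
#!/bin/sh
# Sketch: batch the 3x1 -> 3x3 expansion into one reviewable admin script.
# KVBASE, host names, and ports are assumptions from this lab series.
KVBASE=${KVBASE:-/oraclenosql/lab}
KVHOME=$KVBASE/server1/oraclesoftware/kv-2.0.39

cat > expand_3x3.kvs <<'EOF'
plan change-parameters -service sn1 -wait -params capacity=3
plan change-parameters -service sn2 -wait -params capacity=3
plan change-parameters -service sn3 -wait -params capacity=3
topology clone -current -name 3x3
topology change-repfactor -name 3x3 -pool AllStorageNodes -rf 3 -dc dc1
plan deploy-topology -name 3x3 -wait
EOF

# Review expand_3x3.kvs first, then (on the lab host) feed it to the CLI:
#   java -jar $KVHOME/lib/kvstore.jar runadmin -port 5000 -host server1 load -file expand_3x3.kvs
echo "wrote $(grep -c . expand_3x3.kvs) admin commands to expand_3x3.kvs"
```

Writing the commands to a file first lets you review the plan before anything touches the store.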


$ export KVHOME=$KVBASE/server1/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar runadmin -port 5000 -host server1
kv-> show topology
store=mystore  numPartitions=30 sequence=60
  dc=[dc1] name=datacenter1 repFactor=1

  sn=[sn1]  dc=dc1 server1:5000 capacity=1 RUNNING
    [rg1-rn1] RUNNING
          No performance info available
  sn=[sn2]  dc=dc1 server2:5100 capacity=1 RUNNING
    [rg2-rn1] RUNNING
          No performance info available
  sn=[sn3]  dc=dc1 server3:5200 capacity=1 RUNNING
    [rg3-rn1] RUNNING
          No performance info available

  shard=[rg1] num partitions=10
    [rg1-rn1] sn=sn1
  shard=[rg2] num partitions=10
    [rg2-rn1] sn=sn2
  shard=[rg3] num partitions=10
    [rg3-rn1] sn=sn3

kv-> plan change-parameters -service sn1 -wait -params capacity=3
Executed plan 8, waiting for completion...
Plan 8 ended successfully
kv-> plan change-parameters -service sn2 -wait -params capacity=3
Executed plan 9, waiting for completion...
Plan 9 ended successfully
kv-> plan change-parameters -service sn3 -wait -params capacity=3
Executed plan 10, waiting for completion...
Plan 10 ended successfully
kv-> topology clone -current -name 3x3
Created 3x3
kv-> topology change-repfactor -name 3x3 -pool AllStorageNodes -rf 3 -dc dc1
Changed replication factor in 3x3
kv-> topology preview -name 3x3
Topology transformation from current deployed topology to 3x3:
Create 6 RNs

shard rg1
  2 new RNs : rg1-rn2 rg1-rn3
shard rg2
  2 new RNs : rg2-rn2 rg2-rn3
shard rg3
  2 new RNs : rg3-rn2 rg3-rn3

kv-> plan deploy-topology -name 3x3 -wait
Executed plan 11, waiting for completion...
Plan 11 ended successfully
kv-> show topology
store=mystore  numPartitions=30 sequence=67
  dc=[dc1] name=datacenter1 repFactor=3

  sn=[sn1]  dc=dc1 server1:5000 capacity=3 RUNNING
    [rg1-rn1] RUNNING
          No performance info available
    [rg2-rn2] RUNNING
          No performance info available
    [rg3-rn2] RUNNING
          No performance info available
  sn=[sn2]  dc=dc1 server2:5100 capacity=3 RUNNING
    [rg1-rn2] RUNNING
          No performance info available
    [rg2-rn1] RUNNING
          No performance info available
    [rg3-rn3] RUNNING
          No performance info available
  sn=[sn3]  dc=dc1 server3:5200 capacity=3 RUNNING
    [rg1-rn3] RUNNING
          No performance info available
    [rg2-rn3] RUNNING
          No performance info available
    [rg3-rn1] RUNNING
          No performance info available

  shard=[rg1] num partitions=10
    [rg1-rn1] sn=sn1
    [rg1-rn2] sn=sn2
    [rg1-rn3] sn=sn3
  shard=[rg2] num partitions=10
    [rg2-rn1] sn=sn2
    [rg2-rn2] sn=sn1
    [rg2-rn3] sn=sn3
  shard=[rg3] num partitions=10
    [rg3-rn1] sn=sn3
    [rg3-rn2] sn=sn1
    [rg3-rn3] sn=sn2

kv->



So what have we done?


plan change-parameters -service sn1 -wait -params capacity=3
plan change-parameters -service sn2 -wait -params capacity=3
plan change-parameters -service sn3 -wait -params capacity=3
We increase the capacity of each storage node from 1 to 3 with the change-parameters command, so that each storage node can host three replication nodes.
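With more than a handful of storage nodes, the same parameter change can be generated by a small loop instead of being typed per node. A minimal sketch, using the sn names from this lab:

```shell
#!/bin/sh
# Emit one change-parameters command per storage node (sketch).
set_capacity() {
  cap=$1; shift
  for sn in "$@"; do
    echo "plan change-parameters -service $sn -wait -params capacity=$cap"
  done
}

# Print the three commands used above; redirect to a file to reuse them.
set_capacity 3 sn1 sn2 sn3
```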

topology clone -current -name 3x3
We clone the current topology under the new name 3x3.

topology change-repfactor -name 3x3 -pool AllStorageNodes -rf 3 -dc dc1
We use change-repfactor to raise the replication factor to 3. Note that the replication factor cannot be changed again for this topology once this command has been executed.

You can use the show topology command to verify that the storage nodes are up and running. Alternatively, use the web interface to check the 3x3 distribution across the storage nodes.
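Since show topology prints plain text, a quick unix one-liner can confirm the layout as well. The heredoc below simply replays the output captured above (file name topo33.out is made up for the example):

```shell
#!/bin/sh
# Sanity-check the 3x3 layout from captured `show topology` output.
cat > topo33.out <<'EOF'
sn=[sn1]  dc=dc1 server1:5000 capacity=3 RUNNING
  [rg1-rn1] RUNNING
  [rg2-rn2] RUNNING
  [rg3-rn2] RUNNING
sn=[sn2]  dc=dc1 server2:5100 capacity=3 RUNNING
  [rg1-rn2] RUNNING
  [rg2-rn1] RUNNING
  [rg3-rn3] RUNNING
sn=[sn3]  dc=dc1 server3:5200 capacity=3 RUNNING
  [rg1-rn3] RUNNING
  [rg2-rn3] RUNNING
  [rg3-rn1] RUNNING
EOF

# A healthy 3x3 store shows 9 replication nodes, all RUNNING.
total=$(grep -c '\[rg' topo33.out)
running=$(grep '\[rg' topo33.out | grep -c RUNNING)
echo "RNs: $total, running: $running"
```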

Big Data Oracle NoSQL in No Time - Expanding 1x1 to 3x1 Topology Part 4


Index
Big Data Oracle NoSQL in No Time - Getting Started Part 1
Big Data Oracle NoSQL in No Time - Startup & Shutdown Part 2

Big Data Oracle NoSQL in No Time - Setting up 1x1 Topology Part 3
Big Data Oracle NoSQL in No Time - Expanding 1x1 to 3x1 Topology Part 4
Big Data Oracle NoSQL in No Time - From 3x1 to 3x3 Topology Part 5
Big Data Oracle NoSQL in No Time - Smoke Testing Part 6
Big Data Oracle NoSQL in No Time - Increasing Throughput Read/Write Part 7
Big Data Oracle NoSQL in No Time - It is time to Upgrade
Big Data Oracle NoSQL in No Time - It is time to Load Data for a Simple Use Case

Previously we set up a 1x1 topology; now we are going to expand it to 3x1. This spreads the data across more shards in the NoSQL store. The main advantage is higher write throughput, and it is achieved with the redistribute command: during redistribution, partitions are moved onto the new shards, so you end up with more replication nodes to serve your write operations.

In the scenario below we are going to add two storage nodes to the existing topology to make it write friendly.
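As with the later posts, the interactive session below can also be captured as a single admin script and reviewed before running. A sketch, assuming the host names and ports of this lab, and relying on the admin CLI's load -file option for the final step:

```shell
#!/bin/sh
# Sketch: the whole 1x1 -> 3x1 expansion as one reviewable admin script.
cat > expand_3x1.kvs <<'EOF'
plan deploy-sn -dc dc1 -port 5100 -wait -host server2
plan deploy-sn -dc dc1 -port 5200 -wait -host server3
topology clone -current -name 3x1
topology redistribute -name 3x1 -pool AllStorageNodes
plan deploy-topology -name 3x1 -wait
EOF

# Review the file, then feed it to the admin CLI in one shot, e.g.:
#   java -jar $KVHOME/lib/kvstore.jar runadmin -port 5000 -host server1 load -file expand_3x1.kvs
echo "wrote $(grep -c . expand_3x1.kvs) commands to expand_3x1.kvs"
```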

$ export KVBASE=/oraclenosql/lab
$ export KVHOME=$KVBASE/server1/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar runadmin -port 5000 -host server1
kv-> plan deploy-sn -dc dc1 -port 5100 -wait -host server2
Executed plan 5, waiting for completion...
Plan 5 ended successfully
kv-> plan deploy-sn -dc dc1 -port 5200 -wait -host server3
Executed plan 6, waiting for completion...
Plan 6 ended successfully
kv-> show topology
store=mystore  numPartitions=30 sequence=36
  dc=[dc1] name=datacenter1 repFactor=1

  sn=[sn1]  dc=dc1 server1:5000 capacity=1 RUNNING
    [rg1-rn1] RUNNING
          No performance info available
  sn=[sn2]  dc=dc1 server2:5100 capacity=1 RUNNING
  sn=[sn3]  dc=dc1 server3:5200 capacity=1 RUNNING

  shard=[rg1] num partitions=30
    [rg1-rn1] sn=sn1

kv->
kv-> topology clone -current -name 3x1
Created 3x1
kv-> topology redistribute -name 3x1 -pool AllStorageNodes
Redistributed: 3x1
kv-> topology preview -name 3x1
Topology transformation from current deployed topology to 3x1:
Create 2 shards
Create 2 RNs
Migrate 20 partitions

shard rg2
  1 new RN : rg2-rn1
  10 partition migrations
shard rg3
  1 new RN : rg3-rn1
  10 partition migrations

kv-> plan deploy-topology -name 3x1 -wait
Executed plan 7, waiting for completion...
Plan 7 ended successfully
kv-> show topology
store=mystore  numPartitions=30 sequence=60
  dc=[dc1] name=datacenter1 repFactor=1

  sn=[sn1]  dc=dc1 server1:5000 capacity=1 RUNNING
    [rg1-rn1] RUNNING
          No performance info available
  sn=[sn2]  dc=dc1 server2:5100 capacity=1 RUNNING
    [rg2-rn1] RUNNING
          No performance info available
  sn=[sn3]  dc=dc1 server3:5200 capacity=1 RUNNING
    [rg3-rn1] RUNNING
          No performance info available

  shard=[rg1] num partitions=10
    [rg1-rn1] sn=sn1
  shard=[rg2] num partitions=10
    [rg2-rn1] sn=sn2
  shard=[rg3] num partitions=10
    [rg3-rn1] sn=sn3

kv->



So what have we done?

plan deploy-sn -dc dc1 -port 5100 -wait -host server2
We add a second storage node to datacenter dc1, which already has one storage node.

plan deploy-sn -dc dc1 -port 5200 -wait -host server3
We add one more storage node to datacenter dc1, bringing the total to three.

topology clone -current -name 3x1
We clone the existing 1x1 topology into a new candidate topology named 3x1. This candidate is the one all of the planned change operations are applied to.

topology redistribute -name 3x1 -pool AllStorageNodes
We redistribute the partitions across the 3x1 topology.
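The numbers printed by topology preview can be sanity-checked with simple arithmetic: 30 partitions spread evenly over 3 shards leaves 10 per shard, so 20 partitions must leave the original shard.

```shell
#!/bin/sh
# Back-of-the-envelope check of `topology preview`'s migration numbers.
partitions=30
old_shards=1
new_shards=3
per_shard=$((partitions / new_shards))
migrations=$((partitions - per_shard * old_shards))
echo "$per_shard partitions per shard, $migrations migrations"
```

This matches the preview above: 10 partitions stay on rg1 and 10 migrate to each of rg2 and rg3.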

topology preview -name 3x1
We can preview the topology before deploying it to the store.

plan deploy-topology -name 3x1 -wait
We approve and deploy the 3x1 plan. The deployment takes time to complete, depending on the size of the store.

You can use the show topology command to verify that the storage nodes are up and running. Alternatively, use the web interface to check the 3x1 distribution across the storage nodes.
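A quick unix check on the captured show topology text can also confirm that the redistribute left the partitions balanced. The heredoc replays the shard lines from the output above (topo31.out is a made-up file name):

```shell
#!/bin/sh
# Confirm the redistribute left partitions balanced across shards.
cat > topo31.out <<'EOF'
shard=[rg1] num partitions=10
shard=[rg2] num partitions=10
shard=[rg3] num partitions=10
EOF

shards=$(grep -c 'shard=\[rg' topo31.out)
unbalanced=$(grep -c -v 'num partitions=10' topo31.out)
echo "shards: $shards, unbalanced: $unbalanced"
```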

Thursday, March 20, 2014

Packt Publishing now Buy One, Get One Free

Buy One, Get One Free on all of #Packt’s 2000 eBooks! bit.ly/1j26nPN #Packt2k


Tuesday, March 11, 2014

Big Data Oracle NoSQL in No Time - Setting up 1x1 Topology Part 3


Index
Big Data Oracle NoSQL in No Time - Getting Started Part 1
Big Data Oracle NoSQL in No Time - Startup & Shutdown Part 2

Big Data Oracle NoSQL in No Time - Setting up 1x1 Topology Part 3
Big Data Oracle NoSQL in No Time - Expanding 1x1 to 3x1 Topology Part 4
Big Data Oracle NoSQL in No Time - From 3x1 to 3x3 Topology Part 5
Big Data Oracle NoSQL in No Time - Smoke Testing Part 6
Big Data Oracle NoSQL in No Time - Increasing Throughput Read/Write Part 7
Big Data Oracle NoSQL in No Time - It is time to Upgrade
Big Data Oracle NoSQL in No Time - It is time to Load Data for a Simple Use Case

Now let us quickly create a 1x1 topology.

$ export KVBASE=/oraclenosql/lab
$ export KVHOME=$KVBASE/server1/oraclesoftware/kv-2.0.39
$ java -jar $KVHOME/lib/kvstore.jar runadmin -port 5000 -host server1
kv-> configure -name mystore
Store configured: mystore
kv-> plan deploy-datacenter -name "datacenter1" -rf 1 -wait
Executed plan 1, waiting for completion...
Plan 1 ended successfully
kv-> plan deploy-sn -dc dc1 -port 5000 -host server1 -wait
Executed plan 2, waiting for completion...
Plan 2 ended successfully
kv-> plan deploy-admin -sn sn1 -port 5001 -wait
Executed plan 3, waiting for completion...
Plan 3 ended successfully
kv-> topology create -name 1x1 -pool AllStorageNodes -partitions 30
Created: 1x1
kv-> plan deploy-topology -name 1x1 -wait
Executed plan 4, waiting for completion...
Plan 4 ended successfully
kv-> show topology
store=mystore  numPartitions=30 sequence=34
  dc=[dc1] name=datacenter1 repFactor=1

  sn=[sn1]  dc=dc1 server1:5000 capacity=1 RUNNING
    [rg1-rn1] RUNNING
          No performance info available

  shard=[rg1] num partitions=30
    [rg1-rn1] sn=sn1

kv->

So what have we done?

configure -name mystore
We created a key-value store named mystore with the configure command.

plan deploy-datacenter -name "datacenter1" -rf 1 -wait
We created a plan that deploys a datacenter with replication factor 1.
A replication factor of 1 is not advisable for a NoSQL deployment, since only one copy of the data is maintained; in the event of a failure, all of the data is lost.
A replication factor of 3 is a good starting point for a NoSQL deployment, and over the course of this series we will work up to it.
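The reason 3 is the usual starting point is simple majority math: a shard keeps accepting writes only while a majority of its replication factor, rf/2 + 1 nodes, is up. An illustrative sketch (generic quorum arithmetic, not NoSQL-specific code):

```shell
#!/bin/sh
# Majority math behind the replication-factor advice (illustrative sketch).
quorum() { echo $(( $1 / 2 + 1 )); }
tolerated_failures() { echo $(( $1 - ($1 / 2 + 1) )); }

for rf in 1 2 3; do
  echo "rf=$rf: quorum $(quorum $rf), survives $(tolerated_failures $rf) node failure(s)"
done
```

Note that rf=2 still survives zero failures, which is why the jump from 1 is usually straight to 3.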

plan deploy-sn -dc dc1 -port 5000 -host server1 -wait
We deployed a storage node using deploy-sn, passing it the datacenter identifier, the host name, and the port number. To find the datacenter identifier, issue the command "show topology"; here it gives us "dc1".

plan deploy-admin -sn sn1 -port 5001 -wait
In addition to the storage node, we deployed an admin service on this node. For high availability, additional admin services can be configured; a good starting point is to run them on three nodes.

topology create -name 1x1 -pool AllStorageNodes -partitions 30
We created a topology named 1x1 over the AllStorageNodes pool with 30 partitions. A storage pool can also be created separately with the command "pool create -name AllStorageNodes". Pay attention to the partition count: it is a one-time configuration parameter and cannot be changed later. Since this is a demo environment, I have used 30 partitions.
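Because the partition count is fixed for the life of the store, size it for the largest topology you might grow into: partitions are divided among shards, so the count caps how many balanced shards the store can ever have. With this demo's numbers:

```shell
#!/bin/sh
# How far a fixed partition count lets the store grow (sketch).
partitions=30
for shards in 1 3 30; do
  echo "$shards shard(s): $((partitions / shards)) partitions each"
done
```

30 partitions balance evenly across 1, 3, or at most 30 shards, which is plenty for this series' 3x3 target.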

plan deploy-topology -name 1x1 -wait
Finally, we deployed the topology we created. Using show topology, we can verify the result.

Popular Posts