
Create a local cluster

Installing GUI and DB nodes

Download the latest 4.* version here, choosing either the x86 or x64 platform MSI. During installation, in the server configuration dialog, set your machine name and port 999. The installer places three components on the machine: a single node (scimoredb.exe, which runs as a service), the administrator/manager GUI, and the .NET provider.

NOTE: Scimore.Data.ScimoreClientNative.[x86/x64].dll and scimoreagent.exe must reside in the same directory as scimoredb.exe.
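
To confirm the node's service was installed and is running before continuing, you can list the Windows services whose names contain "scimore" (the filter string is an assumption; adjust it to the actual service names on your machine):

sc query state= all | findstr /i "scimore"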

Next, install 5 more scimoredb nodes. To do so, create the folders c:\data\db1, c:\data\db2, c:\data\db3, c:\data\db4 and c:\data\db5.

Then, in a command prompt (NOTE: on Windows 7/Vista, run the command prompt as administrator), run the following commands:

scimoredb.exe -startup/instance=1 -startup/console=1 -startup/mode=1 -db/syslog="C:\data\db1" -db/systables="C:\data\db1" -db/data="C:\data\db1" -net/endpoint="localhost:1000"  

scimoredb.exe -startup/instance=2 -startup/console=1 -startup/mode=1 -db/syslog="C:\data\db2" -db/systables="C:\data\db2" -db/data="C:\data\db2" -net/endpoint="localhost:1001"  

scimoredb.exe -startup/instance=3 -startup/console=1 -startup/mode=1 -db/syslog="C:\data\db3" -db/systables="C:\data\db3" -db/data="C:\data\db3" -net/endpoint="localhost:1002"  

scimoredb.exe -startup/instance=4 -startup/console=1 -startup/mode=1 -db/syslog="C:\data\db4" -db/systables="C:\data\db4" -db/data="C:\data\db4" -net/endpoint="localhost:1003"  

scimoredb.exe -startup/instance=5 -startup/console=1 -startup/mode=1 -db/syslog="C:\data\db5" -db/systables="C:\data\db5" -db/data="C:\data\db5" -net/endpoint="localhost:1004" 

These commands create the additional db nodes on the local machine. The -net/endpoint="localhost:xxx" parameter sets a node's connection address. In a normal installation the connection parameter is machineName:port, so that any node on any machine can reach it; we use localhost:xxx here because all cluster nodes will run on a single machine, which is for demo purposes only.
Additionally, the db/cachePage=x parameter sets how many pages the database caches (the page size is 8 KB). The default is 3000 pages (about 24 MB), which is low for big tables.
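
The five folder creations and registrations above can also be scripted. A minimal batch sketch, assuming the same paths and ports as above (the -db/cachePage flag form is inferred from the parameter name and is an assumption):

@echo off
setlocal enabledelayedexpansion
rem Create the data folders and register instances 1-5 on ports 1000-1004.
for /L %%i in (1,1,5) do (
    set /a port=999+%%i
    mkdir C:\data\db%%i
    rem Appending e.g. -db/cachePage=12500 (about 100 MB) would raise the default cache; flag form assumed.
    scimoredb.exe -startup/instance=%%i -startup/console=1 -startup/mode=1 -db/syslog="C:\data\db%%i" -db/systables="C:\data\db%%i" -db/data="C:\data\db%%i" -net/endpoint="localhost:!port!"
)

Save it as a .bat file and run it from an elevated command prompt.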

Next, start the new nodes by executing in a command prompt:

net start scimoredb-1 
net start scimoredb-2 
net start scimoredb-3 
net start scimoredb-4 
net start scimoredb-5 
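
Equivalently, all five services can be started with a single loop typed directly at the prompt (single percent signs, because this runs interactively rather than from a .bat file):

for /L %i in (1,1,5) do net start scimoredb-%i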

Creating cluster

Open the manager's query window connected to localhost, port 999. The query window may return errors such as ".. node is in stand-by mode .."; ignore them, since most SQL commands fail while a node has not yet joined a partition group. In the query window, execute the SQL command:

create cluster endpoint('localhost:999') 
go; 
add partition group
(
    add endpoint('localhost:999'),
    add endpoint('localhost:1000')
);
go; 
commit cluster; 

This command creates a cluster with a single partition containing 2 nodes. To verify the cluster state, execute:

select * from system.sysinstances; 

or:

select * from system.syscluster; 

The syscluster table is a logging table that records all cluster changes. It can also be used to track failover actions, by comparing the most recent cluster state against the previous version.

Altering cluster

Altering the cluster allows adding new partitions, adding or removing nodes, and splitting an existing partition.

Add nodes and partition group

For example, in a single SQL batch we add 2 new nodes to the existing partition and add a new partition with another 2 nodes. Execute the following SQL:

alter cluster; 
go; 
add partition group repartition with 0
(
    add endpoint('localhost:1001'),
    add endpoint('localhost:1002')
);

alter partition group 0
(
    add endpoint('localhost:1003'),
    add endpoint('localhost:1004')
);
commit cluster; 

The add partition group... statement creates a new partition (with new id 1). The ...repartition with 0 clause indicates that partition (0) will split its data with the new partition (1), so partition (0)'s hash range will change.

In the current example, partition (0) manages 4 nodes (as can be seen with: select * from system.sysinstances):

localhost:999 
localhost:1000 
localhost:1003 
localhost:1004 

Split partition

We can split partition (0) into 2 partitions, so that nodes "localhost:999" and "localhost:1000" remain in partition (0), while nodes "localhost:1003" and "localhost:1004" will belong to the new partition (2). The SQL command:

alter cluster; 
split partition group 0;
commit cluster; 

Drop node from partition

To remove a node from the cluster, execute the drop node SQL command:

alter cluster; 
alter partition group 0
(
    drop endpoint('localhost:1000')
);
commit cluster; 

This drops the node from partition (0). The dropped node goes into stand-by mode and will shut down shortly. If the node is to be added to the cluster again, a full node reinstall is required.