Adding to my last two posts, VMware Blockchain 1.6.0.1 Install on vSphere 7U3 and Deploying a test DAML application on VMware Blockchain 1.6.0.1, I want to go through the process of adding more nodes to, or scaling out, the Blockchain deployment. This is not complicated, but it is not as simple as clicking a button in a UI either. As long as you have the original deployment descriptors and the output from the original deployment operation handy, you should not have much trouble.
You need to be aware that you should not scale the replica and client nodes at the same time. I’ll provide examples for scaling each type of node separately so you can see how the process varies slightly for each.
Information about the deployment
This information is specific to my deployment but you will find it useful to have all of this documented and handy as you get your descriptor files in order.
Original Client Node IP Address: 192.168.100.35
Original Replica Node IP Addresses: 192.168.100.31 192.168.100.32 192.168.100.33 192.168.100.34
Original Full Copy Client Node IP Address: 192.168.100.36
New Client Node IP Address: 192.168.100.40
New Replica Node IP Addresses: 192.168.100.37 192.168.100.38 192.168.100.39
Original deployment descriptor: /home/blockchain/descriptors/deployment_descriptor.json
Original infrastructure descriptor: /home/blockchain/descriptors/infrastructure_descriptor.json
Original deployment output: /home/blockchain/output/EPG-blockchain-deployment_2022-01-18T15:24:30.516614
Deploy the operator container
The operator container is a special-use container that runs on a client node and is used specifically for operations related to scaling and backups.
SSH to the client node and run the following command to get the IMAGE ID value for the operator image:
sudo docker images |egrep 'IMAGE ID|operator'
You should see output similar to the following:
REPOSITORY TAG IMAGE ID CREATED SIZE
harbor.corp.vmw/vmwblockchain/operator 1.6.0.1.266 ae2a0236ff92 2 months ago 904MB
In this example the IMAGE ID value is ae2a0236ff92.
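If you'd rather script this step, the IMAGE ID column can be captured with awk. This is just a convenience sketch run against a captured sample line; in practice you would pipe the real `sudo docker images` output in instead:

```shell
# Sample line from `docker images` output (hard-coded here for illustration)
sample='harbor.corp.vmw/vmwblockchain/operator   1.6.0.1.266   ae2a0236ff92   2 months ago   904MB'

# Column 3 of `docker images` output is the IMAGE ID
image_id=$(printf '%s\n' "$sample" | awk '/operator/ {print $3}')
echo "$image_id"   # ae2a0236ff92
```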
Issue a command similar to the following to encrypt and install the operator private key (created during the initial deployment). Be sure to replace ae2a0236ff92 with your own IMAGE ID value.
docker run -ti --network=blockchain-fabric --name=operator --entrypoint /operator/install_private_key.py --rm -v /config/daml-ledger-api/concord-operator:/operator/config-local -v /config/daml-ledger-api/concord-operator:/concord/config-public -v /config/daml-ledger-api/config-local/cert:/config/daml-ledger-api/config-local/cert -v /config/daml-ledger-api/config-public:/operator/config-public ae2a0236ff92
You should see output similar to the following:
Paste the private key and press ctrl+d on a blank line:
You will need to paste the contents of the private key that you created and converted to a single line (instructions for this were in my post, VMware Blockchain 1.6.0.1 Install on vSphere 7U3).
-----BEGIN EC PRIVATE KEY-----
MHcCAQEEIDPQc1Kepy9mhKS3f+kYaXb26dW5fQW3E/x5Ue+oSixhoAoGCCqGSM49
AwEHoUQDQgAEp8KvgIfJsiyG0ttxuGuHYu0k+E6yx3sJdgawvdEGlUpGKmZVO64L
gWKKlkdUWyb+VOylaIwkpycyaxWZrwz5/w==
-----END EC PRIVATE KEY-----
Type CTRL-D and you should see the following output:
Provision successful
Key installed successfully
Issue a command similar to the following to run the operator container. Be sure to replace ae2a0236ff92 with your own IMAGE ID value.
docker run -d --network=blockchain-fabric --name=operator -v /config/daml-ledger-api/concord-operator:/operator/config-local -v /config/daml-ledger-api/concord-operator:/concord/config-public -v /config/daml-ledger-api/config-local/cert:/config/daml-ledger-api/config-local/cert -v /config/daml-ledger-api/config-public:/operator/config-public --restart always ae2a0236ff92
The --restart always flag is not strictly needed here, but I wanted this container to always run automatically. You can remove it if you don't have the same intention.
Add a client node
Create a scaleup deployment descriptor file
You will need to create a scaleup deployment descriptor file, which will be used by the Orchestrator appliance to create a new client node (VM). This descriptor file needs the blockchain, clients, clientNodeSpec and operatorSpecifications stanzas to be completed for the scale operation to work. Most of the information can be copied from the initial deployment descriptor file, deployment_descriptor.json in my case.
The following command was run from the Orchestrator appliance to get the blockchainId, consortiumName and damlDbPassword values:
egrep "DAML_DB_PASSWORD|Consortium Name|Blockchain Id" /home/blockchain/output/$(ls -t /home/blockchain/output/ |grep -v json |head -n 1) | grep -v SUCCESS
The following output was returned:
Consortium Name: EPG-blockchain-deployment, Consortium Id: 827bd2dc-4dbb-4025-81f5-7a39426b0655
Blockchain Id: b4af2e91-7930-4ed9-9476-b9a21c4cbd7e
Node Id: a1b9dae0-a16a-472b-bfd3-fb76c241ffdf, name: https://vcsa-01a.corp.vmw//rest/vcenter/vm/vm-10006, key: DAML_DB_PASSWORD, value: b1o_N4-sU6rtS8S
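If you want to pull values like the Blockchain Id out programmatically rather than reading the summary by eye, grep's -o flag with a UUID pattern works. A sketch, with the summary text hard-coded here purely for illustration:

```shell
# Trimmed sample of the summary output (in practice, grep the real output file)
summary='Consortium Id: 827bd2dc-4dbb-4025-81f5-7a39426b0655Blockchain Id: b4af2e91-7930-4ed9-9476-b9a21c4cbd7eNode Id: a1b9dae0-a16a-472b-bfd3-fb76c241ffdf'

# Match "Blockchain Id: <uuid>" and keep only the UUID (field 3)
blockchain_id=$(printf '%s\n' "$summary" \
  | grep -oE 'Blockchain Id: [0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}' \
  | awk '{print $3}')
echo "$blockchain_id"   # b4af2e91-7930-4ed9-9476-b9a21c4cbd7e
```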
With these values and the information from the original deployment descriptor file, you can create a scaleup descriptor file similar to the following:
{
"blockchain": {
"consortiumName": "EPG-blockchain-deployment",
"blockchainType": "DAML",
"blockchainId": "b4af2e91-7930-4ed9-9476-b9a21c4cbd7e"
},
"clients": [
{
"zoneName": "test-zone-client",
"providedIp": "192.168.100.40",
"groupName": "Group1",
"damlDbPassword": "b1o_N4-sU6rtS8S"
}
],
"clientNodeSpec": {
"cpuCount": 8,
"memoryGb": 24,
"diskSizeGb": 64
},
"operatorSpecifications": {
"operatorPublicKey": "-----BEGIN PUBLIC KEY-----\nMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEp8KvgIfJsiyG0ttxuGuHYu0k+E6y\nx3sJdgawvdEGlUpGKmZVO64LgWKKlkdUWyb+VOylaIwkpycyaxWZrwz5/w==\n-----END PUBLIC KEY-----\n"
}
}
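Before handing any hand-edited descriptor to the Orchestrator, it is worth confirming the file is valid JSON, since a stray comma is the easiest mistake to make here. A quick sketch, assuming python3 is available on the appliance and demonstrated on a throwaway sample file rather than the real descriptor:

```shell
# Write a throwaway sample descriptor (stand-in for your real file)
cat > /tmp/sample_descriptor.json <<'EOF'
{ "blockchain": { "blockchainType": "DAML" } }
EOF

# json.tool exits non-zero if the file is not valid JSON
if python3 -m json.tool /tmp/sample_descriptor.json > /dev/null 2>&1; then
  echo "valid JSON"
else
  echo "invalid JSON"
fi
```

Point it at /home/blockchain/descriptors/scaleup_client_deployment_descriptor.json (or whichever descriptor you just wrote) in practice.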
Deploy the new client node
From the Orchestrator appliance, you'll issue commands similar to the following (very similar to the commands used for the initial deployment) to use the scaleup deployment descriptor file to create the new client node:
cd /home/blockchain/orchestrator-runtime/
ORCHESTRATOR_DEPLOYMENT_TYPE=SCALE CONFIG_SERVICE_IP=192.168.110.80 ORCHESTRATOR_DESCRIPTORS_DIR=/home/blockchain/descriptors INFRA_DESC_FILENAME=infrastructure_descriptor.json DEPLOY_DESC_FILENAME=scaleup_client_deployment_descriptor.json ORCHESTRATOR_OUTPUT_DIR=/home/blockchain/output docker-compose -f /home/blockchain/orchestrator-runtime/docker-compose-orchestrator.yml up
You should see output similar to the following:
Recreating orchestrator-runtime_castor_1 ... done
Attaching to orchestrator-runtime_castor_1
castor_1 | wait-for-it.sh: waiting 60 seconds for persephone-provisioning:9002
castor_1 | wait-for-it.sh: persephone-provisioning:9002 is available after 0 seconds
castor_1 | **************************************************
castor_1 | VMware Blockchain Orchestrator(c) Vmware Inc. 2020
castor_1 | **************************************************
castor_1 |
castor_1 | [INFO ] [2022-08-07 23:30:29.746] [thread=background-preinit] [OpID=] [user=] [org=] [function=Version] [message=HV000001: Hibernate Validator 6.2.3.Final]
castor_1 | [INFO ] [2022-08-07 23:30:29.922] [thread=main] [OpID=] [user=] [org=] [function=CastorApplication] [message=Starting CastorApplication using Java 11.0.11 on 3f731ec3c4d4 with PID 1 (/castor/castor.jar started by blockchain in /castor)]
castor_1 | [INFO ] [2022-08-07 23:30:29.956] [thread=main] [OpID=] [user=] [org=] [function=CastorApplication] [message=No active profile set, falling back to 1 default profile: "default"]
castor_1 | [INFO ] [2022-08-07 23:30:32.376] [thread=main] [OpID=] [user=] [org=] [function=ClientGroupNodeSizingPropertiesProcessor] [message=Processed predefined client node sizing file: classpath:client_group_node_sizing_internal.json]
castor_1 | [INFO ] [2022-08-07 23:30:32.380] [thread=main] [OpID=] [user=] [org=] [function=ClientGroupNodeSizingPropertiesProcessor] [message=Predefined form-factors: SMALL, MEDIUM, LARGE]
castor_1 | [INFO ] [2022-08-07 23:30:32.407] [thread=main] [OpID=] [user=] [org=] [function=ClientGroupNodeSizingPropertiesProcessor] [message=No custom node sizing properties file found at: file:/descriptors/client_group_node_sizing.json]
castor_1 | [INFO ] [2022-08-07 23:30:32.411] [thread=main] [OpID=] [user=] [org=] [function=GenesisJsonProcessor] [message=Custom Genesis file is not provided. Using default Genesis file. ]
castor_1 | [INFO ] [2022-08-07 23:30:32.413] [thread=main] [OpID=] [user=] [org=] [function=GenesisJsonProcessor] [message=Genesis block file does not exist in custom path / default path.]
castor_1 | [INFO ] [2022-08-07 23:30:32.959] [thread=main] [OpID=] [user=] [org=] [function=CastorApplication] [message=Started CastorApplication in 3.795 seconds (JVM running for 5.762)]
castor_1 | [INFO ] [2022-08-07 23:30:32.964] [thread=main] [OpID=] [user=] [org=] [function=DeployerServiceImpl] [message=Starting Castor deployer service]
castor_1 | [INFO ] [2022-08-07 23:30:32.965] [thread=main] [OpID=] [user=] [org=] [function=DeployerServiceImpl] [message=Using provided deployment type: SCALE]
castor_1 | [INFO ] [2022-08-07 23:30:33.456] [thread=main] [OpID=] [user=] [org=] [function=ValidatorServiceImpl] [message=Finished provisioning validation, found 0 errors]
castor_1 | [INFO ] [2022-08-07 23:30:33.783] [thread=main] [OpID=] [user=] [org=] [function=DeploymentHelper] [message=Generated consortium id: b601efcb-0ee6-4d8d-86ca-a43971a20dfe for consortium: EPG-blockchain-deployment]
castor_1 | [INFO ] [2022-08-07 23:30:33.792] [thread=main] [OpID=] [user=] [org=] [function=DeploymentHelper] [message=Secure Store not provided, so defaulting to Secure store of type DISK with default url: file:///config/agent/secrets/secret_key.json]
castor_1 | [INFO ] [2022-08-07 23:30:33.795] [thread=main] [OpID=] [user=] [org=] [function=DeploymentHelper] [message=Building sites]
castor_1 | [INFO ] [2022-08-07 23:30:33.937] [thread=main] [OpID=] [user=] [org=] [function=DeploymentHelper] [message=Signature save type provided isDISABLE]
castor_1 | [INFO ] [2022-08-07 23:30:39.211] [thread=main] [OpID=] [user=] [org=] [function=ProvisionerServiceImpl] [message=Deployment submitted, request id 86bf4b73-d1b7-46a1-a9c3-e3d6ac51f72e]
castor_1 | [INFO ] [2022-08-07 23:35:50.272] [thread=grpc-default-executor-1] [OpID=] [user=] [org=] [function=DeploymentExecutionEventResponseObserver] [message=onNext event received for requestId: 86bf4b73-d1b7-46a1-a9c3-e3d6ac51f72e, event: session_id: "86bf4b73-d1b7-46a1-a9c3-e3d6ac51f72e"
At this point, a new VM will be provisioned in vCenter in the same location that the original client nodes were deployed. Ultimately, you will see the operation finish with output similar to the following on the Orchestrator appliance:
castor_1 | [INFO ] [2022-08-07 23:35:50.335] [thread=grpc-default-executor-1] [OpID=] [user=] [org=] [function=DeploymentExecutionEventResponseObserver] [message=onNext event received for requestId: 86bf4b73-d1b7-46a1-a9c3-e3d6ac51f72e, event: session_id: "86bf4b73-d1b7-46a1-a9c3-e3d6ac51f72e"
castor_1 | type: COMPLETED
castor_1 | status: SUCCESS
castor_1 | blockchain_id: "b4af2e91-7930-4ed9-9476-b9a21c4cbd7e"
castor_1 | consortium_id: "b601efcb-0ee6-4d8d-86ca-a43971a20dfe"
castor_1 | blockchain_version: "1.6.0.1.266"
castor_1 | ]
castor_1 | [INFO ] [2022-08-07 23:35:50.351] [thread=grpc-default-executor-1] [OpID=] [user=] [org=] [function=DeploymentExecutionEventResponseObserver] [message=Deployment with requestId: 86bf4b73-d1b7-46a1-a9c3-e3d6ac51f72e succeeded]
castor_1 | [INFO ] [2022-08-07 23:35:50.353] [thread=main] [OpID=] [user=] [org=] [function=DeployerServiceImpl] [message=Deployment completed with status: SUCCESS]
orchestrator-runtime_castor_1 exited with code 0
Create a reconfigure deployment descriptor file
A reconfigure deployment descriptor file needs to be created. This descriptor file is used in a subsequent operation so that the entire blockchain deployment can be informed of the new node that was just created. As with the scaleup deployment descriptor file, most of the information will come from the initial deployment descriptor file and the initial deployment output.
The following command was run from the Orchestrator appliance to get the nodeId values for all of the original nodes:
grep 192.168.100 /home/blockchain/output/$(ls -t /home/blockchain/output/ |grep -v json |tail -n 1) |grep -v http | awk '{print $NF" "$3}' | sort -n | sed 's/,//'
The following output was returned:
192.168.100.31 0bedb2f1-8aa1-4642-a922-69d5f23edeb7
192.168.100.32 ad662820-f24e-451d-ac9a-e653af51e3d7
192.168.100.33 9014a45f-6fe0-44be-8eae-6260763e7daa
192.168.100.34 f69f6599-5cc5-4013-87c4-01f8cd620433
192.168.100.35 a1b9dae0-a16a-472b-bfd3-fb76c241ffdf
192.168.100.36 8c7528a4-de07-4ad8-8857-0decb8cf00e0
In this example, 192.168.100.31-34 are the replica nodes, 192.168.100.35 is the client node and 192.168.100.36 is the full copy client node.
The following command was run from the Orchestrator appliance to get the clientGroupId value:
grep CLIENT_GROUP_ID /home/blockchain/output/$(ls -t /home/blockchain/output/ |grep -v json |tail -n 1) | awk '{print $NF}'
The following output was returned:
08ad38df-02fe-448a-9210-e56f1ca8d814
The command for obtaining the damlDbPassword value was noted earlier in the Create a scaleup deployment descriptor file section.
The following command was run from the Orchestrator appliance to get the nodeId value for the new client node:
egrep 'nodeId|VM_IP' /home/blockchain/output/$(ls -t /home/blockchain/output/ |head -n 1)
The following output was returned:
"nodeId": "de6b8356-2793-4383-8899-139818707eeb",
"VM_IP": "192.168.100.40",
With all of this information and the original deployment descriptor file, you can create a reconfigure deployment descriptor file similar to the following:
{
"populatedReplicas": [
{
"zoneName": "test-zone-replica",
"providedIp": "192.168.100.31",
"nodeId": "0bedb2f1-8aa1-4642-a922-69d5f23edeb7"
},
{
"zoneName": "test-zone-replica",
"providedIp": "192.168.100.32",
"nodeId": "ad662820-f24e-451d-ac9a-e653af51e3d7"
},
{
"zoneName": "test-zone-replica",
"providedIp": "192.168.100.33",
"nodeId": "9014a45f-6fe0-44be-8eae-6260763e7daa"
},
{
"zoneName": "test-zone-replica",
"providedIp": "192.168.100.34",
"nodeId": "f69f6599-5cc5-4013-87c4-01f8cd620433"
}
],
"populatedClients": [
{
"zoneName": "test-zone-client",
"providedIp": "192.168.100.35",
"nodeId": "a1b9dae0-a16a-472b-bfd3-fb76c241ffdf",
"groupName": "Group1",
"clientGroupId": "08ad38df-02fe-448a-9210-e56f1ca8d814",
"damlDbPassword": "b1o_N4-sU6rtS8S"
},
{
"zoneName": "test-zone-client",
"providedIp": "192.168.100.40",
      "nodeId": "de6b8356-2793-4383-8899-139818707eeb",
"groupName": "Group1",
"clientGroupId": "08ad38df-02fe-448a-9210-e56f1ca8d814",
"damlDbPassword": "b1o_N4-sU6rtS8S"
}
],
"populatedFullCopyClients": [
{
"providedIp": "192.168.100.36",
"zoneName": "test-zone-replica",
"accessKey": "minio",
"bucketName": "blockchain",
"protocol": "HTTP",
"secretKey": "minio123",
"url": "192.168.110.60:9000",
"nodeId": "8c7528a4-de07-4ad8-8857-0decb8cf00e0"
}
],
"replicaNodeSpec": {
"cpuCount": 8,
"memoryGb": 24,
"diskSizeGb": 64
},
"clientNodeSpec": {
"cpuCount": 8,
"memoryGb": 24,
"diskSizeGb": 64
},
"fullCopyClientNodeSpec": {
"cpuCount": 8,
"memoryGb": 24,
"diskSizeGb": 64
},
"blockchain": {
"consortiumName": "EPG-blockchain-deployment",
"blockchainType": "DAML",
"blockchainId": "b4af2e91-7930-4ed9-9476-b9a21c4cbd7e"
},
"operatorSpecifications": {
"operatorPublicKey": "-----BEGIN PUBLIC KEY-----\nMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEp8KvgIfJsiyG0ttxuGuHYu0k+E6y\nx3sJdgawvdEGlUpGKmZVO64LgWKKlkdUWyb+VOylaIwkpycyaxWZrwz5/w==\n-----END PUBLIC KEY-----\n"
}
}
Reconfigure the Blockchain deployment
From the Orchestrator appliance, you'll issue commands similar to the following (very similar to the commands used for the initial deployment) to use the reconfigure deployment descriptor file to reconfigure the blockchain deployment:
cd /home/blockchain/orchestrator-runtime/
ORCHESTRATOR_DEPLOYMENT_TYPE=RECONFIGURE CONFIG_SERVICE_IP=192.168.110.80 ORCHESTRATOR_DESCRIPTORS_DIR=/home/blockchain/descriptors INFRA_DESC_FILENAME=infrastructure_descriptor.json DEPLOY_DESC_FILENAME=reconfigure_client_scaleup_deployment_descriptor.json ORCHESTRATOR_OUTPUT_DIR=/home/blockchain/output docker-compose -f /home/blockchain/orchestrator-runtime/docker-compose-orchestrator.yml up
You should see output similar to the following:
Recreating orchestrator-runtime_castor_1 ... done
Attaching to orchestrator-runtime_castor_1
castor_1 | wait-for-it.sh: waiting 60 seconds for persephone-provisioning:9002
castor_1 | wait-for-it.sh: persephone-provisioning:9002 is available after 0 seconds
castor_1 | **************************************************
castor_1 | VMware Blockchain Orchestrator(c) Vmware Inc. 2020
castor_1 | **************************************************
castor_1 |
castor_1 | [INFO ] [2022-08-07 23:49:04.002] [thread=background-preinit] [OpID=] [user=] [org=] [function=Version] [message=HV000001: Hibernate Validator 6.2.3.Final]
castor_1 | [INFO ] [2022-08-07 23:49:04.143] [thread=main] [OpID=] [user=] [org=] [function=CastorApplication] [message=Starting CastorApplication using Java 11.0.11 on 7aca5f149d68 with PID 1 (/castor/castor.jar started by blockchain in /castor)]
castor_1 | [INFO ] [2022-08-07 23:49:04.209] [thread=main] [OpID=] [user=] [org=] [function=CastorApplication] [message=No active profile set, falling back to 1 default profile: "default"]
castor_1 | [INFO ] [2022-08-07 23:49:06.625] [thread=main] [OpID=] [user=] [org=] [function=ClientGroupNodeSizingPropertiesProcessor] [message=Processed predefined client node sizing file: classpath:client_group_node_sizing_internal.json]
castor_1 | [INFO ] [2022-08-07 23:49:06.627] [thread=main] [OpID=] [user=] [org=] [function=ClientGroupNodeSizingPropertiesProcessor] [message=Predefined form-factors: SMALL, MEDIUM, LARGE]
castor_1 | [INFO ] [2022-08-07 23:49:06.635] [thread=main] [OpID=] [user=] [org=] [function=ClientGroupNodeSizingPropertiesProcessor] [message=No custom node sizing properties file found at: file:/descriptors/client_group_node_sizing.json]
castor_1 | [INFO ] [2022-08-07 23:49:06.639] [thread=main] [OpID=] [user=] [org=] [function=GenesisJsonProcessor] [message=Custom Genesis file is not provided. Using default Genesis file. ]
castor_1 | [INFO ] [2022-08-07 23:49:06.640] [thread=main] [OpID=] [user=] [org=] [function=GenesisJsonProcessor] [message=Genesis block file does not exist in custom path / default path.]
castor_1 | [INFO ] [2022-08-07 23:49:07.137] [thread=main] [OpID=] [user=] [org=] [function=CastorApplication] [message=Started CastorApplication in 3.651 seconds (JVM running for 5.72)]
castor_1 | [INFO ] [2022-08-07 23:49:07.141] [thread=main] [OpID=] [user=] [org=] [function=DeployerServiceImpl] [message=Starting Castor deployer service]
castor_1 | [INFO ] [2022-08-07 23:49:07.143] [thread=main] [OpID=] [user=] [org=] [function=DeployerServiceImpl] [message=Using provided deployment type: RECONFIGURE]
castor_1 | [INFO ] [2022-08-07 23:49:07.620] [thread=main] [OpID=] [user=] [org=] [function=ValidatorServiceImpl] [message=Finished provisioning validation, found 0 errors]
castor_1 | [INFO ] [2022-08-07 23:49:07.623] [thread=main] [OpID=] [user=] [org=] [function=ValidatorServiceImpl] [message=Finished reconfiguration validation, found 0 errors]
castor_1 | [INFO ] [2022-08-07 23:49:07.825] [thread=main] [OpID=] [user=] [org=] [function=DeploymentHelper] [message=Generated consortium id: 8b47949d-6a63-4567-8652-5f20d32e6fd2 for consortium: EPG-blockchain-deployment]
castor_1 | [INFO ] [2022-08-07 23:49:07.832] [thread=main] [OpID=] [user=] [org=] [function=DeploymentHelper] [message=Secure Store not provided, so defaulting to Secure store of type DISK with default url: file:///config/agent/secrets/secret_key.json]
castor_1 | [INFO ] [2022-08-07 23:49:07.834] [thread=main] [OpID=] [user=] [org=] [function=DeploymentHelper] [message=Building sites]
castor_1 | [INFO ] [2022-08-07 23:49:07.921] [thread=main] [OpID=] [user=] [org=] [function=DeploymentHelper] [message=Signature save type provided isDISABLE]
castor_1 | [INFO ] [2022-08-07 23:49:15.866] [thread=main] [OpID=] [user=] [org=] [function=DeployerServiceImpl] [message=Deployment completed with status: SUCCESS]
orchestrator-runtime_castor_1 exited with code 0
This part of the process should only take a minute to complete.
It is very important to note that from the time this task completes, you must finish the remaining steps in under 45 minutes: there is a hard-coded timeout that will invalidate the reconfiguration if it is not completed within that window.
Scale the Blockchain deployment
From the original client node, issue the following command to stop all containers except the agent and cre containers:
curl -X POST 127.0.0.1:8546/api/node/management?action=stop
If you see that the cre container is not running in the docker ps output, you can restart it with the docker start cre command.
From the Orchestrator appliance, you can query the reconfiguration job output file to get the Reconfiguration Id value:
grep 'Reconfiguration Id' /home/blockchain/output/$(ls -t /home/blockchain/output |grep -v json |head -n 1)|awk '{print $3}'
You should see output similar to the following:
854ebd60-023d-4c65-b523-f93c0f167e2e
From the original client node, issue a command similar to the following to execute the scale operation (with the previously noted Reconfiguration Id value) against the Blockchain cluster:
sudo docker exec -it operator sh -c './concop scale --clients execute --configuration 854ebd60-023d-4c65-b523-f93c0f167e2e;./concop scale --replicas execute --configuration 854ebd60-023d-4c65-b523-f93c0f167e2e'
You should see output similar to the following:
{"succ":true}
{"succ":true}
You can check the status of the scale operation by issuing the following command:
sudo docker exec -it operator sh -c './concop scale --clients status;./concop scale --replicas status'
You should see output similar to the following:
{"192.168.100.31":[{"UUID":"a1b9dae0-a16a-472b-bfd3-fb76c241ffdf","configuration":"854ebd60-023d-4c65-b523-f93c0f167e2e"}],"192.168.100.32":[{"UUID":"a1b9dae0-a16a-472b-bfd3-fb76c241ffdf","configuration":"854ebd60-023d-4c65-b523-f93c0f167e2e"}],"192.168.100.33":[{"UUID":"a1b9dae0-a16a-472b-bfd3-fb76c241ffdf","configuration":"854ebd60-023d-4c65-b523-f93c0f167e2e"}],"192.168.100.34":[{"UUID":"a1b9dae0-a16a-472b-bfd3-fb76c241ffdf","configuration":"854ebd60-023d-4c65-b523-f93c0f167e2e"}]}
{"192.168.100.31":{"bft":false,"configuration":"854ebd60-023d-4c65-b523-f93c0f167e2e","restart":false,"wedge_status":true},"192.168.100.32":{"bft":false,"configuration":"854ebd60-023d-4c65-b523-f93c0f167e2e","restart":false,"wedge_status":true},"192.168.100.33":{"bft":false,"configuration":"854ebd60-023d-4c65-b523-f93c0f167e2e","restart":false,"wedge_status":true},"192.168.100.34":{"bft":false,"configuration":"854ebd60-023d-4c65-b523-f93c0f167e2e","restart":false,"wedge_status":true}}
The scale operation is successful when all nodes report the noted Reconfiguration Id as their configuration value and all of the replica nodes report a wedge_status value of true.
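Rather than eyeballing the status JSON, you can script the wedge check. A sketch, assuming python3 is available on the client node; the status JSON is a trimmed, hard-coded sample here, where in practice you would pipe in the real concop output:

```shell
# Trimmed sample of the `concop scale --replicas status` JSON (illustrative only)
status='{"192.168.100.31":{"wedge_status":true},"192.168.100.32":{"wedge_status":true}}'

# Report success only if every replica in the map has wedge_status == true
printf '%s' "$status" | python3 -c '
import json, sys
data = json.load(sys.stdin)
ok = all(v.get("wedge_status") is True for v in data.values())
print("all wedged" if ok else "not wedged yet")
'
# prints: all wedged
```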
Update the Blockchain nodes
Log in to each replica and full copy client node and issue the following command to stop all containers except the agent container:
curl -X POST 127.0.0.1:8546/api/node/management?action=stop
From the original client node, issue the following commands to create an archive of the current database and copy it to the new client node:
sudo tar cvzf client.tgz /mnt/data/db;sudo chown vmbc:users client.tgz
scp client.tgz 192.168.100.40:
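One way to confirm the archive survived the copy intact is to compare checksums on both nodes before extracting it. A local sketch of the idea, where the cp stands in for the scp step and the file is a throwaway demo rather than the real client.tgz:

```shell
# Throwaway file standing in for client.tgz
echo "demo data" > /tmp/client_demo.tgz

# Checksum on the source side
src_sum=$(sha256sum /tmp/client_demo.tgz | awk '{print $1}')

# Copy (stands in for: scp client.tgz 192.168.100.40:)
cp /tmp/client_demo.tgz /tmp/client_demo_copy.tgz

# Checksum on the destination side; compare the two
dst_sum=$(sha256sum /tmp/client_demo_copy.tgz | awk '{print $1}')
[ "$src_sum" = "$dst_sum" ] && echo "checksums match"
```

In practice, run sha256sum client.tgz on the original client node and again on the new one after the scp.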
On the new client node, issue the following command to extract the database archive:
sudo tar xvzf client.tgz -C /
On the new client node, issue the following command to update the /config/agent/config.json file. This change will allow the agent container to pull down the new Blockchain cluster configuration.
sudo sed -i '/\(COMPONENT_NO_LAUNCH\|SKIP_CONFIG_RETRIEVAL\)/d' /config/agent/config.json
You can check the /config/agent/config.json file on the new client node to validate that the COMPONENT_NO_LAUNCH and SKIP_CONFIG_RETRIEVAL lines have been removed.
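If you want to see exactly what that sed deletion does before touching the real file, you can rehearse it on a throwaway sample. The sample content below is illustrative only, not the real config.json (and the -i flag assumes GNU sed, as found on the appliance VMs):

```shell
# Throwaway sample standing in for /config/agent/config.json (illustrative keys)
cat > /tmp/config_demo.json <<'EOF'
{
  "COMPONENT_NO_LAUNCH": "example",
  "SKIP_CONFIG_RETRIEVAL": "true",
  "keep": "me"
}
EOF

# Same deletion pattern as the real command: drop both matching lines
sed -i '/\(COMPONENT_NO_LAUNCH\|SKIP_CONFIG_RETRIEVAL\)/d' /tmp/config_demo.json
cat /tmp/config_demo.json   # only the "keep" line remains between the braces
```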
From the Orchestrator appliance, issue a command similar to the following to get the configurationSession: id value from the original deployment:
grep 'Deployment Request Id' /home/blockchain/output/$(ls -t /home/blockchain/output/ |grep -v json |tail -n 1) | awk '{print $NF}'
You should see output similar to the following:
94d1199f-bfb9-4859-9105-b660a827de3b
On all nodes (original and new) in the Blockchain cluster, issue a command similar to the following to update the configurationSession: id value. Be sure to replace the original configurationSession: id value (94d1199f-bfb9-4859-9105-b660a827de3b in this example) and the new session id value (the Reconfiguration Id value noted earlier, 854ebd60-023d-4c65-b523-f93c0f167e2e in this example) with your own.
sudo sed -i 's/\(\"id\"\: \"94d1199f-bfb9-4859-9105-b660a827de3b\"\|\"id\"\: \"inactive\"\)/\"id\"\: \"854ebd60-023d-4c65-b523-f93c0f167e2e\"/g' /config/agent/config.json
You can check the /config/agent/config.json file on the nodes to validate that the configurationSession: id value is set to the Reconfiguration Id value noted earlier.
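The id substitution can likewise be rehearsed on a sample file first. Again, the content below is an illustrative stand-in for the real config.json, and GNU sed is assumed:

```shell
# Throwaway sample with the original session id (illustrative stand-in)
cat > /tmp/session_demo.json <<'EOF'
{ "configurationSession": { "id": "94d1199f-bfb9-4859-9105-b660a827de3b" } }
EOF

# Replace either the original id or "inactive" with the Reconfiguration Id
sed -i 's/\("id": "94d1199f-bfb9-4859-9105-b660a827de3b"\|"id": "inactive"\)/"id": "854ebd60-023d-4c65-b523-f93c0f167e2e"/g' /tmp/session_demo.json
cat /tmp/session_demo.json   # id is now the Reconfiguration Id
```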
On all nodes (original and new) in the Blockchain cluster, issue a command similar to the following to load the new configuration. Be sure to replace the configurationSession: id value (the Reconfiguration Id value noted earlier, 854ebd60-023d-4c65-b523-f93c0f167e2e in this example) with your own.
curl -ik -X POST http://localhost:8546/api/node/reconfigure/854ebd60-023d-4c65-b523-f93c0f167e2e
You should see output similar to the following:
HTTP/1.1 201 Created
Date: Sun, 07 Aug 2022 23:57:03 GMT
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
X-Frame-Options: DENY
Content-Length: 0
On all nodes (original and new) in the Blockchain cluster, issue a command similar to the following to start all containers. The order here is important: replica nodes first, then full copy clients, then clients.
curl -ik -X POST -H "content-type: application/json" --data '{ "containerNames" : ["all"] }' http://localhost:8546/api/node/restart
You should see output similar to the following:
HTTP/1.1 201 Created
Date: Sun, 07 Aug 2022 23:59:07 GMT
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
X-Frame-Options: DENY
Content-Length: 0
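The restart order above can be captured in a small loop so nothing gets started out of sequence. This sketch only echoes the calls, using the IPs from this deployment; in practice you would run the curl on each node (for example over ssh) instead of the echo:

```shell
# Node groups from this deployment, listed in the required start order:
# replicas first, then full copy clients, then clients (including the new 192.168.100.40)
replicas='192.168.100.31 192.168.100.32 192.168.100.33 192.168.100.34'
full_copy_clients='192.168.100.36'
clients='192.168.100.35 192.168.100.40'

for ip in $replicas $full_copy_clients $clients; do
  # Stand-in for running the restart curl on that node
  echo "restart $ip"
done
```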
Add a replica node
Create a scaleup deployment descriptor file
You will need to create a scaleup deployment descriptor file, which will be used by the Orchestrator appliance to create new replica nodes (VMs). This descriptor file needs the blockchain, replicas, replicaNodeSpec and operatorSpecifications stanzas to be completed for the scale operation to work. Most of the information can be copied from the initial deployment descriptor file, deployment_descriptor.json in my case.
The following command was run from the Orchestrator appliance to get the blockchainId, consortiumName and damlDbPassword values:
egrep "DAML_DB_PASSWORD|Consortium Name|Blockchain Id" /home/blockchain/output/$(ls -t /home/blockchain/output/ |grep -v json |head -n 1) | grep -v SUCCESS
The following output was returned:
Consortium Name: EPG-blockchain-deployment, Consortium Id: 827bd2dc-4dbb-4025-81f5-7a39426b0655
Blockchain Id: b4af2e91-7930-4ed9-9476-b9a21c4cbd7e
Node Id: a1b9dae0-a16a-472b-bfd3-fb76c241ffdf, name: https://vcsa-01a.corp.vmw//rest/vcenter/vm/vm-10006, key: DAML_DB_PASSWORD, value: b1o_N4-sU6rtS8S
With these values and the information from the original deployment descriptor file, you can create a scaleup descriptor file similar to the following:
{
"blockchain": {
"consortiumName": "EPG-blockchain-deployment",
"blockchainType": "DAML",
"blockchainId": "b4af2e91-7930-4ed9-9476-b9a21c4cbd7e"
},
"replicas": [
{
"zoneName": "test-zone-replica",
"providedIp": "192.168.100.37"
},
{
"zoneName": "test-zone-replica",
"providedIp": "192.168.100.38"
},
{
"zoneName": "test-zone-replica",
"providedIp": "192.168.100.39"
}
],
"replicaNodeSpec": {
"cpuCount": 8,
"memoryGb": 24,
"diskSizeGb": 64
},
"operatorSpecifications": {
"operatorPublicKey": "-----BEGIN PUBLIC KEY-----\nMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEp8KvgIfJsiyG0ttxuGuHYu0k+E6y\nx3sJdgawvdEGlUpGKmZVO64LgWKKlkdUWyb+VOylaIwkpycyaxWZrwz5/w==\n-----END PUBLIC KEY-----\n"
}
}
You can see that this descriptor is adding three new replica nodes. This is the minimum number of replica nodes that can be added.
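As background on why three is the minimum increment: Concord-style BFT replica sets are commonly sized as n = 3f + 1 replicas to tolerate f faulty nodes. Assuming that formula applies here (my assumption, not something stated in the product docs quoted above), the only valid counts are 4, 7, 10, and so on, so going from four to seven replicas raises fault tolerance from f=1 to f=2:

```shell
# Assumed BFT sizing rule: n = 3f + 1 replicas tolerate f faulty nodes
for f in 1 2 3; do
  echo "f=$f tolerated faults -> n=$((3 * f + 1)) replicas"
done
# prints:
# f=1 tolerated faults -> n=4 replicas
# f=2 tolerated faults -> n=7 replicas
# f=3 tolerated faults -> n=10 replicas
```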
Deploy the new replica nodes
From the Orchestrator appliance, you'll issue commands similar to the following (very similar to the commands used for the initial deployment) to use the scaleup deployment descriptor file to create the new replica nodes:
cd /home/blockchain/orchestrator-runtime/
ORCHESTRATOR_DEPLOYMENT_TYPE=SCALE CONFIG_SERVICE_IP=192.168.110.80 ORCHESTRATOR_DESCRIPTORS_DIR=/home/blockchain/descriptors INFRA_DESC_FILENAME=infrastructure_descriptor.json DEPLOY_DESC_FILENAME=scaleup_replica_deployment_descriptor.json ORCHESTRATOR_OUTPUT_DIR=/home/blockchain/output docker-compose -f /home/blockchain/orchestrator-runtime/docker-compose-orchestrator.yml up
You should see output similar to the following:
Recreating orchestrator-runtime_castor_1 ... done
Attaching to orchestrator-runtime_castor_1
castor_1 | wait-for-it.sh: waiting 60 seconds for persephone-provisioning:9002
castor_1 | wait-for-it.sh: persephone-provisioning:9002 is available after 0 seconds
castor_1 | **************************************************
castor_1 | VMware Blockchain Orchestrator(c) Vmware Inc. 2020
castor_1 | **************************************************
castor_1 |
castor_1 | [INFO ] [2022-08-07 23:30:29.746] [thread=background-preinit] [OpID=] [user=] [org=] [function=Version] [message=HV000001: Hibernate Validator 6.2.3.Final]
castor_1 | [INFO ] [2022-08-07 23:30:29.922] [thread=main] [OpID=] [user=] [org=] [function=CastorApplication] [message=Starting CastorApplication using Java 11.0.11 on 3f731ec3c4d4 with PID 1 (/castor/castor.jar started by blockchain in /castor)]
castor_1 | [INFO ] [2022-08-07 23:30:29.956] [thread=main] [OpID=] [user=] [org=] [function=CastorApplication] [message=No active profile set, falling back to 1 default profile: "default"]
castor_1 | [INFO ] [2022-08-07 23:30:32.376] [thread=main] [OpID=] [user=] [org=] [function=ClientGroupNodeSizingPropertiesProcessor] [message=Processed predefined client node sizing file: classpath:client_group_node_sizing_internal.json]
castor_1 | [INFO ] [2022-08-07 23:30:32.380] [thread=main] [OpID=] [user=] [org=] [function=ClientGroupNodeSizingPropertiesProcessor] [message=Predefined form-factors: SMALL, MEDIUM, LARGE]
castor_1 | [INFO ] [2022-08-07 23:30:32.407] [thread=main] [OpID=] [user=] [org=] [function=ClientGroupNodeSizingPropertiesProcessor] [message=No custom node sizing properties file found at: file:/descriptors/client_group_node_sizing.json]
castor_1 | [INFO ] [2022-08-07 23:30:32.411] [thread=main] [OpID=] [user=] [org=] [function=GenesisJsonProcessor] [message=Custom Genesis file is not provided. Using default Genesis file. ]
castor_1 | [INFO ] [2022-08-07 23:30:32.413] [thread=main] [OpID=] [user=] [org=] [function=GenesisJsonProcessor] [message=Genesis block file does not exist in custom path / default path.]
castor_1 | [INFO ] [2022-08-07 23:30:32.959] [thread=main] [OpID=] [user=] [org=] [function=CastorApplication] [message=Started CastorApplication in 3.795 seconds (JVM running for 5.762)]
castor_1 | [INFO ] [2022-08-07 23:30:32.964] [thread=main] [OpID=] [user=] [org=] [function=DeployerServiceImpl] [message=Starting Castor deployer service]
castor_1 | [INFO ] [2022-08-07 23:30:32.965] [thread=main] [OpID=] [user=] [org=] [function=DeployerServiceImpl] [message=Using provided deployment type: SCALE]
castor_1 | [INFO ] [2022-08-07 23:30:33.456] [thread=main] [OpID=] [user=] [org=] [function=ValidatorServiceImpl] [message=Finished provisioning validation, found 0 errors]
castor_1 | [INFO ] [2022-08-07 23:30:33.783] [thread=main] [OpID=] [user=] [org=] [function=DeploymentHelper] [message=Generated consortium id: b601efcb-0ee6-4d8d-86ca-a43971a20dfe for consortium: EPG-blockchain-deployment]
castor_1 | [INFO ] [2022-08-07 23:30:33.792] [thread=main] [OpID=] [user=] [org=] [function=DeploymentHelper] [message=Secure Store not provided, so defaulting to Secure store of type DISK with default url: file:///config/agent/secrets/secret_key.json]
castor_1 | [INFO ] [2022-08-07 23:30:33.795] [thread=main] [OpID=] [user=] [org=] [function=DeploymentHelper] [message=Building sites]
castor_1 | [INFO ] [2022-08-07 23:30:33.937] [thread=main] [OpID=] [user=] [org=] [function=DeploymentHelper] [message=Signature save type provided isDISABLE]
castor_1 | [INFO ] [2022-08-07 23:30:39.211] [thread=main] [OpID=] [user=] [org=] [function=ProvisionerServiceImpl] [message=Deployment submitted, request id 86bf4b73-d1b7-46a1-a9c3-e3d6ac51f72e]
castor_1 | [INFO ] [2022-08-07 23:35:50.272] [thread=grpc-default-executor-1] [OpID=] [user=] [org=] [function=DeploymentExecutionEventResponseObserver] [message=onNext event received for requestId: 86bf4b73-d1b7-46a1-a9c3-e3d6ac51f72e, event: session_id: "86bf4b73-d1b7-46a1-a9c3-e3d6ac51f72e"
At this point, new VMs will be provisioned in vCenter in the same location where the original replica nodes were deployed. Ultimately, you will see the operation finish with output similar to the following on the Orchestrator appliance:
castor_1 | [INFO ] [2022-08-07 23:35:50.335] [thread=grpc-default-executor-1] [OpID=] [user=] [org=] [function=DeploymentExecutionEventResponseObserver] [message=onNext event received for requestId: 86bf4b73-d1b7-46a1-a9c3-e3d6ac51f72e, event: session_id: "86bf4b73-d1b7-46a1-a9c3-e3d6ac51f72e"
castor_1 | type: COMPLETED
castor_1 | status: SUCCESS
castor_1 | blockchain_id: "b4af2e91-7930-4ed9-9476-b9a21c4cbd7e"
castor_1 | consortium_id: "b601efcb-0ee6-4d8d-86ca-a43971a20dfe"
castor_1 | blockchain_version: "1.6.0.1.266"
castor_1 | ]
castor_1 | [INFO ] [2022-08-07 23:35:50.351] [thread=grpc-default-executor-1] [OpID=] [user=] [org=] [function=DeploymentExecutionEventResponseObserver] [message=Deployment with requestId: 86bf4b73-d1b7-46a1-a9c3-e3d6ac51f72e succeeded]
castor_1 | [INFO ] [2022-08-07 23:35:50.353] [thread=main] [OpID=] [user=] [org=] [function=DeployerServiceImpl] [message=Deployment completed with status: SUCCESS]
orchestrator-runtime_castor_1 exited with code 0
Create a reconfigure deployment descriptor file
A reconfigure deployment descriptor file needs to be created. This descriptor file is used in a subsequent operation so that the entire blockchain deployment can be informed of the new nodes that were just created. As with the scaleup deployment descriptor file, most of the information will come from the initial deployment descriptor file and the initial deployment output.
The following command was run from the Orchestrator appliance to get the nodeId values for all of the original nodes:
grep 192.168.100 /home/blockchain/output/$(ls -t /home/blockchain/output/ |grep -v json |tail -n 1) |grep -v http | awk '{print $NF" "$3}' | sort -n | sed 's/,//'
The following output was returned:
192.168.100.31 0bedb2f1-8aa1-4642-a922-69d5f23edeb7
192.168.100.32 ad662820-f24e-451d-ac9a-e653af51e3d7
192.168.100.33 9014a45f-6fe0-44be-8eae-6260763e7daa
192.168.100.34 f69f6599-5cc5-4013-87c4-01f8cd620433
192.168.100.35 a1b9dae0-a16a-472b-bfd3-fb76c241ffdf
192.168.100.36 8c7528a4-de07-4ad8-8857-0decb8cf00e0
In this example, 192.168.100.31-34 are the replica nodes, 192.168.100.35 is the client node and 192.168.100.36 is the full copy client node.
The following command was run from the Orchestrator appliance to get the clientGroupId value:
grep CLIENT_GROUP_ID /home/blockchain/output/$(ls -t /home/blockchain/output/ |grep -v json |tail -n 1) | awk '{print $NF}'
The following output was returned:
08ad38df-02fe-448a-9210-e56f1ca8d814
The command for obtaining the damlDbPassword value was noted earlier in the Create a scaleup deployment descriptor file section.
The following command was run from the Orchestrator appliance to get the nodeId values for the new replica nodes:
egrep 'nodeId|VM_IP' /home/blockchain/output/$(ls -t /home/blockchain/output/ |head -n 1)
The following output was returned:
"nodeId": "f78ec215-ab82-46d6-9449-73ce78f61343",
"VM_IP": "192.168.100.37",
"nodeId": "56303767-74fb-4f73-b201-002da46b8b56",
"VM_IP": "192.168.100.38",
"nodeId": "0474cd98-b3eb-43b5-881b-db15e2da3f9b",
"VM_IP": "192.168.100.39",
With all of this information and the original deployment descriptor file, you can create a reconfigure deployment descriptor file similar to the following:
{
"populatedReplicas": [
{
"zoneName": "test-zone-replica",
"providedIp": "192.168.100.31",
"nodeId": "0bedb2f1-8aa1-4642-a922-69d5f23edeb7"
},
{
"zoneName": "test-zone-replica",
"providedIp": "192.168.100.32",
"nodeId": "ad662820-f24e-451d-ac9a-e653af51e3d7"
},
{
"zoneName": "test-zone-replica",
"providedIp": "192.168.100.33",
"nodeId": "9014a45f-6fe0-44be-8eae-6260763e7daa"
},
{
"zoneName": "test-zone-replica",
"providedIp": "192.168.100.34",
"nodeId": "f69f6599-5cc5-4013-87c4-01f8cd620433"
},
{
"zoneName": "test-zone-replica",
"providedIp": "192.168.100.37",
"nodeId": "<get from scale output>"
},
{
"zoneName": "test-zone-replica",
"providedIp": "192.168.100.38",
"nodeId": "<get from scale output>"
},
{
"zoneName": "test-zone-replica",
"providedIp": "192.168.100.39",
"nodeId": "<get from scale output>"
}
],
"populatedClients": [
{
"zoneName": "test-zone-client",
"providedIp": "192.168.100.35",
"nodeId": "a1b9dae0-a16a-472b-bfd3-fb76c241ffdf",
"groupName": "Group1",
"clientGroupId": "08ad38df-02fe-448a-9210-e56f1ca8d814",
"damlDbPassword": "b1o_N4-sU6rtS8S"
}
],
"populatedFullCopyClients": [
{
"providedIp": "192.168.100.36",
"zoneName": "test-zone-replica",
"accessKey": "minio",
"bucketName": "blockchain",
"protocol": "HTTP",
"secretKey": "minio123",
"url": "192.168.110.60:9000",
"nodeId": "8c7528a4-de07-4ad8-8857-0decb8cf00e0"
}
],
"replicaNodeSpec": {
"cpuCount": 8,
"memoryGb": 24,
"diskSizeGb": 64
},
"clientNodeSpec": {
"cpuCount": 8,
"memoryGb": 24,
"diskSizeGb": 64
},
"fullCopyClientNodeSpec": {
"cpuCount": 8,
"memoryGb": 24,
"diskSizeGb": 64
},
"blockchain": {
"consortiumName": "EPG-blockchain-deployment",
"blockchainType": "DAML",
"blockchainId": "b4af2e91-7930-4ed9-9476-b9a21c4cbd7e"
},
"operatorSpecifications": {
"operatorPublicKey": "-----BEGIN PUBLIC KEY-----\nMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEp8KvgIfJsiyG0ttxuGuHYu0k+E6y\nx3sJdgawvdEGlUpGKmZVO64LgWKKlkdUWyb+VOylaIwkpycyaxWZrwz5/w==\n-----END PUBLIC KEY-----\n"
}
}
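Before handing the descriptor to the Orchestrator, it can be worth confirming it parses as valid JSON, since a stray comma will otherwise surface as a validation failure mid-run. This is an optional sanity check, not part of the documented procedure; it assumes python3 is available on the appliance, and the temp file below stands in for your real descriptor path:

```shell
# Optional sanity check: validate the reconfigure descriptor's JSON syntax.
# The mktemp file is a stand-in; point json.tool at
# /home/blockchain/descriptors/<your reconfigure descriptor>.json instead.
desc=$(mktemp)
cat > "$desc" <<'EOF'
{ "blockchain": { "consortiumName": "EPG-blockchain-deployment", "blockchainType": "DAML" } }
EOF
if python3 -m json.tool "$desc" > /dev/null 2>&1; then
  result="descriptor OK"
else
  result="descriptor INVALID"
fi
echo "$result"
rm -f "$desc"
```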
Reconfigure the Blockchain deployment
From the Orchestrator appliance, you’ll issue commands similar to the following (and very similar to the same commands used for the initial deployment) to use the reconfigure deployment descriptor file to reconfigure the blockchain deployment:
cd /home/blockchain/orchestrator-runtime/
ORCHESTRATOR_DEPLOYMENT_TYPE=RECONFIGURE CONFIG_SERVICE_IP=192.168.110.80 ORCHESTRATOR_DESCRIPTORS_DIR=/home/blockchain/descriptors INFRA_DESC_FILENAME=infrastructure_descriptor.json DEPLOY_DESC_FILENAME=reconfigure_replica_scaleup_deployment_descriptor.json ORCHESTRATOR_OUTPUT_DIR=/home/blockchain/output docker-compose -f /home/blockchain/orchestrator-runtime/docker-compose-orchestrator.yml up
You should see output similar to the following:
Recreating orchestrator-runtime_castor_1 ... done
Attaching to orchestrator-runtime_castor_1
castor_1 | wait-for-it.sh: waiting 60 seconds for persephone-provisioning:9002
castor_1 | wait-for-it.sh: persephone-provisioning:9002 is available after 0 seconds
castor_1 | **************************************************
castor_1 | VMware Blockchain Orchestrator(c) Vmware Inc. 2020
castor_1 | **************************************************
castor_1 |
castor_1 | [INFO ] [2022-08-07 23:49:04.002] [thread=background-preinit] [OpID=] [user=] [org=] [function=Version] [message=HV000001: Hibernate Validator 6.2.3.Final]
castor_1 | [INFO ] [2022-08-07 23:49:04.143] [thread=main] [OpID=] [user=] [org=] [function=CastorApplication] [message=Starting CastorApplication using Java 11.0.11 on 7aca5f149d68 with PID 1 (/castor/castor.jar started by blockchain in /castor)]
castor_1 | [INFO ] [2022-08-07 23:49:04.209] [thread=main] [OpID=] [user=] [org=] [function=CastorApplication] [message=No active profile set, falling back to 1 default profile: "default"]
castor_1 | [INFO ] [2022-08-07 23:49:06.625] [thread=main] [OpID=] [user=] [org=] [function=ClientGroupNodeSizingPropertiesProcessor] [message=Processed predefined client node sizing file: classpath:client_group_node_sizing_internal.json]
castor_1 | [INFO ] [2022-08-07 23:49:06.627] [thread=main] [OpID=] [user=] [org=] [function=ClientGroupNodeSizingPropertiesProcessor] [message=Predefined form-factors: SMALL, MEDIUM, LARGE]
castor_1 | [INFO ] [2022-08-07 23:49:06.635] [thread=main] [OpID=] [user=] [org=] [function=ClientGroupNodeSizingPropertiesProcessor] [message=No custom node sizing properties file found at: file:/descriptors/client_group_node_sizing.json]
castor_1 | [INFO ] [2022-08-07 23:49:06.639] [thread=main] [OpID=] [user=] [org=] [function=GenesisJsonProcessor] [message=Custom Genesis file is not provided. Using default Genesis file. ]
castor_1 | [INFO ] [2022-08-07 23:49:06.640] [thread=main] [OpID=] [user=] [org=] [function=GenesisJsonProcessor] [message=Genesis block file does not exist in custom path / default path.]
castor_1 | [INFO ] [2022-08-07 23:49:07.137] [thread=main] [OpID=] [user=] [org=] [function=CastorApplication] [message=Started CastorApplication in 3.651 seconds (JVM running for 5.72)]
castor_1 | [INFO ] [2022-08-07 23:49:07.141] [thread=main] [OpID=] [user=] [org=] [function=DeployerServiceImpl] [message=Starting Castor deployer service]
castor_1 | [INFO ] [2022-08-07 23:49:07.143] [thread=main] [OpID=] [user=] [org=] [function=DeployerServiceImpl] [message=Using provided deployment type: RECONFIGURE]
castor_1 | [INFO ] [2022-08-07 23:49:07.620] [thread=main] [OpID=] [user=] [org=] [function=ValidatorServiceImpl] [message=Finished provisioning validation, found 0 errors]
castor_1 | [INFO ] [2022-08-07 23:49:07.623] [thread=main] [OpID=] [user=] [org=] [function=ValidatorServiceImpl] [message=Finished reconfiguration validation, found 0 errors]
castor_1 | [INFO ] [2022-08-07 23:49:07.825] [thread=main] [OpID=] [user=] [org=] [function=DeploymentHelper] [message=Generated consortium id: 8b47949d-6a63-4567-8652-5f20d32e6fd2 for consortium: EPG-blockchain-deployment]
castor_1 | [INFO ] [2022-08-07 23:49:07.832] [thread=main] [OpID=] [user=] [org=] [function=DeploymentHelper] [message=Secure Store not provided, so defaulting to Secure store of type DISK with default url: file:///config/agent/secrets/secret_key.json]
castor_1 | [INFO ] [2022-08-07 23:49:07.834] [thread=main] [OpID=] [user=] [org=] [function=DeploymentHelper] [message=Building sites]
castor_1 | [INFO ] [2022-08-07 23:49:07.921] [thread=main] [OpID=] [user=] [org=] [function=DeploymentHelper] [message=Signature save type provided isDISABLE]
castor_1 | [INFO ] [2022-08-07 23:49:15.866] [thread=main] [OpID=] [user=] [org=] [function=DeployerServiceImpl] [message=Deployment completed with status: SUCCESS]
orchestrator-runtime_castor_1 exited with code 0
This part of the process should only take a minute to complete.
It is very important to note that once this task completes, you have 45 minutes to finish the remaining steps; a hard-coded timeout will invalidate the reconfiguration if it is not completed within that window.
Scale the Blockchain deployment
From each of the replica and full copy client nodes, issue the following command to stop all containers except the agent and cre containers:
curl -X POST 127.0.0.1:8546/api/node/management?action=stop
If you see that the cre container is not running in docker ps output, you can restart it with the docker start cre command.
From the Orchestrator appliance, you can query the reconfiguration job output file to get the Reconfiguration Id value:
grep 'Reconfiguration Id' /home/blockchain/output/$(ls -t /home/blockchain/output |grep -v json |head -n 1)|awk '{print $3}'
You should see output similar to the following:
854ebd60-023d-4c65-b523-f93c0f167e2e
From the original client node, issue a command similar to the following to execute the scale operation (with the previously noted Reconfiguration Id value) against the Blockchain cluster:
sudo docker exec -it operator sh -c './concop scale --replicas execute --configuration 854ebd60-023d-4c65-b523-f93c0f167e2e'
You should see output similar to the following:
{"succ":true}
You can check the status of the scale operation by issuing the following command:
sudo docker exec -it operator sh -c './concop scale --replicas status'
You should see output similar to the following:
{"192.168.100.31":{"bft":false,"configuration":"854ebd60-023d-4c65-b523-f93c0f167e2e","restart":false,"wedge_status":true},"192.168.100.32":{"bft":false,"configuration":"854ebd60-023d-4c65-b523-f93c0f167e2e","restart":false,"wedge_status":true},"192.168.100.33":{"bft":false,"configuration":"854ebd60-023d-4c65-b523-f93c0f167e2e","restart":false,"wedge_status":true},"192.168.100.34":{"bft":false,"configuration":"854ebd60-023d-4c65-b523-f93c0f167e2e","restart":false,"wedge_status":true}}
The scale operation is successful when all nodes report the noted Reconfiguration Id as their configuration value and all of the replica nodes have a wedge_status value of true.
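If you would rather script that success check than eyeball the JSON, a short python3 one-liner can confirm that every replica reports wedge_status of true. This sketch assumes python3 is available on the client node; the trimmed status string below is a stand-in for the real concop status output:

```shell
# Stand-in for the concop status JSON; in practice, capture it with:
#   status=$(sudo docker exec operator sh -c './concop scale --replicas status')
status='{"192.168.100.31":{"bft":false,"wedge_status":true},"192.168.100.32":{"bft":false,"wedge_status":true}}'
# Print "yes" only if every node in the status map reports wedge_status true.
all_wedged=$(printf '%s' "$status" | python3 -c '
import json, sys
nodes = json.load(sys.stdin)
print("yes" if all(n.get("wedge_status") for n in nodes.values()) else "no")
')
echo "$all_wedged"
```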
Update the Blockchain nodes
Log in to each replica and full copy client node and issue the following command to stop all containers except the agent container:
curl -X POST 127.0.0.1:8546/api/node/management?action=stop
From one of the original replica nodes, issue the following commands to create an archive of the current rocksdb database and copy it to the new replica nodes:
sudo tar cvzf replica.tgz /mnt/data/rocksdbdata;sudo chown vmbc:users replica.tgz
scp replica.tgz 192.168.100.37:
scp replica.tgz 192.168.100.38:
scp replica.tgz 192.168.100.39:
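The three scp invocations can also be collapsed into a loop. This sketch echoes each command as a dry run so you can verify the targets first; drop the echo to actually copy:

```shell
# Dry run: print the scp command that would be issued for each new replica IP.
# Remove `echo` (and the surrounding quotes) to perform the copies for real.
cmds=$(for ip in 192.168.100.37 192.168.100.38 192.168.100.39; do
  echo "scp replica.tgz ${ip}:"
done)
printf '%s\n' "$cmds"
```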
On each new replica node, issue the following command to extract the rocksdb database archive:
sudo tar xvzf replica.tgz -C /
On each new replica node, issue the following command to update the /config/agent/config.json file. This change allows the agent container to pull down the new Blockchain cluster configuration.
sudo sed -i '/\(COMPONENT_NO_LAUNCH\|SKIP_CONFIG_RETRIEVAL\)/d' /config/agent/config.json
You can check the /config/agent/config.json file on the new replica nodes to validate that the COMPONENT_NO_LAUNCH and SKIP_CONFIG_RETRIEVAL lines have been removed.
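To see exactly what that sed command does, here it is run against a stand-in fragment of the file (the real /config/agent/config.json has many more keys, and the exact layout may differ; GNU sed is assumed for the \| alternation and the -i flag):

```shell
# Stand-in fragment of /config/agent/config.json for illustration only.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
"COMPONENT_NO_LAUNCH": "true",
"SKIP_CONFIG_RETRIEVAL": "true",
"NODE_TYPE": "replica",
EOF
# Same sed as above: delete any line mentioning either key.
sed -i '/\(COMPONENT_NO_LAUNCH\|SKIP_CONFIG_RETRIEVAL\)/d' "$cfg"
remaining=$(cat "$cfg")
echo "$remaining"
rm -f "$cfg"
```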
From the Orchestrator appliance, issue a command similar to the following to get the configurationSession: id value from the original deployment:
grep 'Deployment Request Id' /home/blockchain/output/$(ls -t /home/blockchain/output/ |grep -v json |tail -n 1) | awk '{print $NF}'
You should see output similar to the following:
94d1199f-bfb9-4859-9105-b660a827de3b
On all nodes (original and new) in the Blockchain cluster, issue a command similar to the following to update the configurationSession: id value. Be sure to replace the original configurationSession: id value (94d1199f-bfb9-4859-9105-b660a827de3b in this example) and the new session id value (the Reconfiguration Id value noted earlier, 854ebd60-023d-4c65-b523-f93c0f167e2e in this example) with your own.
sudo sed -i 's/\(\"id\"\: \"94d1199f-bfb9-4859-9105-b660a827de3b\"\|\"id\"\: \"inactive\"\)/\"id\"\: \"854ebd60-023d-4c65-b523-f93c0f167e2e\"/g' /config/agent/config.json
You can check the /config/agent/config.json file on the nodes to validate that the configurationSession: id value is set to the Reconfiguration Id value noted earlier.
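The id-swap sed can likewise be illustrated against a stand-in line (the exact layout of /config/agent/config.json may differ; GNU sed is assumed):

```shell
# Stand-in for the configurationSession line in /config/agent/config.json.
cfg=$(mktemp)
printf '%s\n' '"configurationSession": { "id": "94d1199f-bfb9-4859-9105-b660a827de3b" }' > "$cfg"
# Same sed as above: replace either the original session id or "inactive"
# with the Reconfiguration Id.
sed -i 's/\(\"id\"\: \"94d1199f-bfb9-4859-9105-b660a827de3b\"\|\"id\"\: \"inactive\"\)/\"id\"\: \"854ebd60-023d-4c65-b523-f93c0f167e2e\"/g' "$cfg"
updated=$(cat "$cfg")
echo "$updated"
rm -f "$cfg"
```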
On all nodes (original and new) in the Blockchain cluster, issue a command similar to the following to load the new configuration. Be sure to replace the configurationSession: id value (the Reconfiguration Id value noted earlier, 854ebd60-023d-4c65-b523-f93c0f167e2e in this example) with your own.
curl -ik -X POST http://localhost:8546/api/node/reconfigure/854ebd60-023d-4c65-b523-f93c0f167e2e
You should see output similar to the following:
HTTP/1.1 201 Created
Date: Sun, 07 Aug 2022 23:57:03 GMT
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
X-Frame-Options: DENY
Content-Length: 0
On all nodes (original and new) in the Blockchain cluster, issue a command similar to the following to start all containers. The order here is important: replica nodes first, then full copy clients, then clients.
curl -ik -X POST -H "content-type: application/json" --data '{ "containerNames" : ["all"] }' http://localhost:8546/api/node/restart
You should see output similar to the following:
HTTP/1.1 201 Created
Date: Sun, 07 Aug 2022 23:59:07 GMT
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
X-Frame-Options: DENY
Content-Length: 0
Redeploy the operator container
After scaling either the client or replica nodes, you will need to redeploy the operator container (if you wish to keep using it) as the Blockchain configuration has changed since it was instantiated.
You can use the same instructions noted at the beginning of this article for creating the operator container after you have run docker rm -f operator to remove the current operator container.