Technologies Used by HashiCorp Tools

Maciej
5 min readSep 1, 2020


HashiCorp

HashiCorp is famous for starting with Vagrant and has gone on to release many useful tools for operations, such as Packer, Serf, Consul, Terraform, Vault, and Nomad.

Each of these tools embodies a great deal of operational knowledge: beyond robustness, rich functionality, and practical processing speed, they also incorporate results from various academic fields.

This article gives a very brief overview of the technology used in each tool, along with references for each.

HashiCorp tools

Serf

Serf uses SWIM, a kind of Gossip (epidemic) protocol, for cluster membership management and failure detection (SWIM: Scalable Weakly-consistent Infection-style Process Group Membership Protocol).

SWIM / Gossip protocol

The SWIM paper can be viewed at the following URL.

A Gossip protocol is a type of protocol used to share information between nodes. Rather than sending a message to every node on the network, a node sends messages to peers it can reach; by repeatedly forwarding messages with a certain probability, the information eventually spreads to all nodes.
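As a toy illustration of the idea, here is a small Go simulation (the function name and parameters are my own, not from any HashiCorp code): each node that already knows a piece of information forwards it to one randomly chosen peer per round, and the information still reaches everyone quickly.

```go
package main

import (
	"fmt"
	"math/rand"
)

// gossipRounds simulates gossip-style dissemination: node 0 starts with a
// piece of information, and every round each informed node forwards it to
// one random peer. Returns how many rounds until all n nodes are informed.
func gossipRounds(n int) int {
	informed := make([]bool, n)
	informed[0] = true
	rounds := 0
	for {
		count := 0
		for _, ok := range informed {
			if ok {
				count++
			}
		}
		if count == n {
			return rounds
		}
		// every informed node gossips to one random peer
		next := make([]bool, n)
		copy(next, informed)
		for j := 0; j < n; j++ {
			if informed[j] {
				next[rand.Intn(n)] = true
			}
		}
		informed = next
		rounds++
	}
}

func main() {
	fmt.Println(gossipRounds(100)) // typically on the order of log(n) rounds
}
```

Despite each node sending only one message per round, the informed set roughly doubles each round, which is why gossip scales so well.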

SWIM is a type of Gossip protocol; specifically, it detects node failures as follows.

Figure 1 of the SWIM paper (https://github.com/lowesoftware/distributed-computing/blob/master/dsn02-swim.pdf)

Each node M maintains a list of cluster members and does the following:

  • At every fixed interval T, it pings a randomly selected member for alive monitoring.
  • If it receives a response (ack), it does nothing and waits for the next interval T.

If there is no response during this monitoring, the following processing is performed. For example, if node Mi receives no response from node Mj:

  • Mi selects k random nodes
  • Mi asks the selected nodes to monitor Mj on its behalf (ping-req)
  • Each selected node pings Mj
  • If Mj responds, the ack is forwarded back to the requester Mi

This reduces false positives in failure detection while also reducing message traffic on the network.
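The steps above can be sketched in Go as follows. This is a toy model of one probe round, not Serf's actual memberlist API: the reachability matrix and all names are illustrative.

```go
package main

import (
	"fmt"
	"math/rand"
)

// reachable[i][j] says whether node i can currently talk to node j.

// directPing returns true if src receives an ack from dst.
func directPing(reachable [][]bool, src, dst int) bool {
	return reachable[src][dst]
}

// probe implements one SWIM failure-detection round: src pings dst directly,
// and on failure asks up to k random helpers to ping dst on its behalf
// (the ping-req step). Returns false if dst is suspected to have failed.
func probe(reachable [][]bool, src, dst, k int) bool {
	if directPing(reachable, src, dst) {
		return true // ack received, nothing to do
	}
	n := len(reachable)
	for i := 0; i < k; i++ {
		helper := rand.Intn(n)
		if helper == src || helper == dst {
			continue
		}
		// helper pings dst and forwards the ack back to src
		if reachable[src][helper] && reachable[helper][dst] {
			return true
		}
	}
	return false // no direct or indirect ack: suspect dst
}

func main() {
	// 4 nodes; node 0 cannot reach node 3 directly, but nodes 1 and 2 can
	reachable := [][]bool{
		{true, true, true, false},
		{true, true, true, true},
		{true, true, true, true},
		{false, true, true, true},
	}
	fmt.Println(probe(reachable, 0, 3, 3)) // usually true, via an indirect ping
}
```

The point of the indirect ping is visible in the example: a transient link problem between 0 and 3 does not get 3 declared dead, because 1 or 2 can still vouch for it.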

HashiCorp's actual implementation of SWIM in Go lives in the following repository.

memberlist adds further modifications on top of the SWIM paper and appears to offer features such as faster cluster convergence and higher data transmission rates.

Consul

Consul uses the Raft consensus algorithm, a distributed consensus algorithm, to maintain data consistency on the Consul servers. Since 0.6, an algorithm based on Vivaldi, a distributed network coordinate system, has been used to estimate the RTT between nodes and find the nearest node. Like Serf, Consul also uses SWIM for membership management and failure detection.

Raft Consensus Algorithm

The Raft consensus algorithm is a distributed consensus algorithm used not only by Consul but also by InfluxDB and CoreOS's etcd.

In Raft, each node has one of the following statuses:

Leader

  • Notifies all members of the cluster the moment it becomes leader
  • Issues heartbeats to all members of the cluster at regular intervals

Follower

  • Belongs to a cluster that has a leader
  • Waits for a heartbeat from the leader for a random time within a fixed range
  • For example, a random time between 100 ms and 500 ms
  • If a heartbeat does not arrive in time, transitions to Candidate status

Candidate

  • Asks the other nodes in the cluster for votes
  • A Candidate votes for itself
  • Moves to Leader status if it receives votes from a quorum of the cluster
  • Moves to Follower status if it learns that another node has become Leader

An example of simple cluster formation from the initial state is as follows.

All nodes start in Follower state

  • Initially there is no leader in the cluster

Wait a random time for each node

The node that has timed out becomes Candidate

  • Ask other nodes in the cluster to vote

The node that received the voting request votes for the Candidate.

Candidate nodes that receive a quorum vote move to Leader status

There is a gif at the link below which may be easier to understand. In the gif example, node S4 times out first, passes through the Candidate state, and becomes Leader.

In addition to the statuses above, each node also manages a term value that identifies the leader's generation and is incremented at each new election. This allows the cluster to re-form correctly even when a cluster that was split is later merged back together.
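The election rules above, including the term handling, can be sketched as follows. This is a toy single-round model in Go, not HashiCorp's raft library; there is no log replication or RPC, and all names are illustrative.

```go
package main

import "fmt"

type node struct {
	id       int
	term     int
	votedFor int // -1 means no vote cast in the current term
}

// requestVote grants the vote if the candidate's term is at least as new
// and this node has not already voted for someone else in that term.
func requestVote(n *node, candidateID, candidateTerm int) bool {
	if candidateTerm < n.term {
		return false // stale candidate
	}
	if candidateTerm > n.term {
		n.term = candidateTerm
		n.votedFor = -1 // new term: the vote resets
	}
	if n.votedFor == -1 || n.votedFor == candidateID {
		n.votedFor = candidateID
		return true
	}
	return false
}

// startElection: the timed-out node increments its term, votes for itself,
// and asks every peer; it becomes Leader with votes from a quorum (majority).
func startElection(candidate *node, peers []*node) bool {
	candidate.term++
	candidate.votedFor = candidate.id
	votes := 1 // the candidate's own vote
	for _, p := range peers {
		if requestVote(p, candidate.id, candidate.term) {
			votes++
		}
	}
	quorum := (len(peers)+1)/2 + 1
	return votes >= quorum
}

func main() {
	nodes := []*node{
		{id: 0, votedFor: -1}, {id: 1, votedFor: -1},
		{id: 2, votedFor: -1}, {id: 3, votedFor: -1}, {id: 4, votedFor: -1},
	}
	// node 2 times out first and becomes Candidate
	won := startElection(nodes[2], append(nodes[:2:2], nodes[3:]...))
	fmt.Println(won) // prints true: 5 of 5 votes, quorum is 3
}
```

The one-vote-per-term rule in `requestVote` is what makes a split cluster safe: two candidates in the same term cannot both collect a majority.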

The following repository is HashiCorp's actual implementation of Raft in Go.

Vault

Vault uses Shamir's Secret Sharing, a method for distributed management of secret data, to manage its master key.

Shamir’s Secret Sharing

The paper on Shamir's Secret Sharing can be viewed at the following URL.

Shamir's Secret Sharing, as the name implies, splits secret data into multiple shares for storage. You can choose how many shares the secret is split into and how many of them are required to recover it.
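As an illustration, here is a minimal (k, n) split/combine over a prime field in Go. This is a sketch for small integer secrets, not Vault's actual implementation (which operates byte-wise on the key); the names and the choice of prime are my own.

```go
package main

import (
	"fmt"
	"math/big"
	"math/rand"
)

var prime = big.NewInt(2147483647) // 2^31 - 1, a prime; secret must be smaller

type share struct{ x, y *big.Int }

// split hides `secret` in a random degree-(k-1) polynomial f with
// f(0) = secret, and hands out n points of f. Any k points determine f.
func split(secret int64, k, n int) []share {
	coeffs := make([]*big.Int, k)
	coeffs[0] = big.NewInt(secret)
	for i := 1; i < k; i++ {
		coeffs[i] = big.NewInt(rand.Int63n(prime.Int64()))
	}
	shares := make([]share, n)
	for i := 0; i < n; i++ {
		x := big.NewInt(int64(i + 1))
		y := big.NewInt(0)
		for j := k - 1; j >= 0; j-- { // Horner's rule, mod p
			y.Mul(y, x)
			y.Add(y, coeffs[j])
			y.Mod(y, prime)
		}
		shares[i] = share{x: x, y: y}
	}
	return shares
}

// combine recovers f(0) from any k shares via Lagrange interpolation mod p.
func combine(shares []share) *big.Int {
	secret := big.NewInt(0)
	for i, si := range shares {
		num, den := big.NewInt(1), big.NewInt(1)
		for j, sj := range shares {
			if i == j {
				continue
			}
			// basis term evaluated at x = 0: (0 - xj) / (xi - xj)
			num.Mul(num, new(big.Int).Neg(sj.x))
			num.Mod(num, prime)
			den.Mul(den, new(big.Int).Sub(si.x, sj.x))
			den.Mod(den, prime)
		}
		term := new(big.Int).Mul(si.y, num)
		term.Mul(term, new(big.Int).ModInverse(den, prime))
		secret.Add(secret, term)
		secret.Mod(secret, prime)
	}
	return secret
}

func main() {
	shares := split(12345, 3, 5) // 3-of-5 split
	fmt.Println(combine(shares[:3])) // any 3 shares recover 12345
}
```

With fewer than k shares the polynomial is completely undetermined, so k-1 shares reveal nothing about the secret; this is exactly why Vault can require, say, 3 of 5 key holders to unseal.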

Nomad

For cluster scheduling, Nomad appears to use approaches based on Borg/Omega, announced by Google, and Sparrow, announced by UC Berkeley.

More info can be viewed at the following URLs.

Each corresponds to one of Nomad's scheduler types: the Borg-based approach appears to be used for the Service scheduler type, and the Sparrow-based one for the Batch scheduler type.


Written by Maciej

DevOps Consultant. I’m strongly focused on automation, security, and reliability.
