
Storage Clustering via HAST Framework in FreeBSD 12.0


The Highly Available Storage (HAST) framework allows transparent storage of the same data across several physically separated machines connected by a TCP/IP network. The framework acts as a RAID1 mirror and is similar to the DRBD storage system used on the GNU/Linux platform. High availability is one of the main requirements of serious business applications, and highly available storage is a key component in such environments.

 

The following are the main features of HAST:

  • Masks I/O errors on local hard drives.
  • File system agnostic: works with any file system supported by FreeBSD.
  • Efficient and quick resynchronization.
  • Increases redundancy.
  • Can be used to build a robust and durable storage system.

 

HAST Configuration

In this document, we are going to see how to replicate a ZFS volume (zvol) of the same size on both nodes using HAST. The main source is the FreeBSD Handbook chapter at https://www.freebsd.org/doc/handbook/disks-hast.html . Configuring HAST is a simple process that involves a configuration file (/etc/hast.conf), the hastctl utility, and the hastd daemon, which provides data synchronization between the ‘Master’ and ‘Secondary’ nodes.

 

Create a configuration file at /etc/hast.conf with the same content on both machines. I have written a shell script which automatically configures HAST on the master and secondary nodes according to the provided values. Both machines must share the same CARP IP. To learn more about CARP IP configuration, please refer to the FreeBSD Handbook chapter at https://www.freebsd.org/doc/handbook/carp.html .
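
For reference, the generated /etc/hast.conf follows the Handbook's resource format. The resource name, hostnames, IP addresses, and zvol path below are only placeholder values to illustrate what the script produces:

# /etc/hast.conf -- identical on both nodes (placeholder values)
resource hastvol0 {
        on node1 {
                local /dev/zvol/tank/hastdata
                remote 192.168.10.2
        }
        on node2 {
                local /dev/zvol/tank/hastdata
                remote 192.168.10.1
        }
}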

 

Note: Before running the hastconf.sh script, make sure the master node's RSA public key is added as an authorized key on the secondary node, so that the master node can transfer files to the secondary node via scp without being prompted for a password.
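
If the key is not already in place, something like the following on the master node will set it up (this assumes root SSH access to the secondary node; the IP address is a placeholder):

# ssh-keygen -t rsa
# cat ~/.ssh/id_rsa.pub | ssh root@192.168.10.2 'cat >> ~/.ssh/authorized_keys'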

 

The script can be downloaded from https://gitlab.com/freebsd1/hast . Instructions on how to run it are in the ‘README’ file.

 

Execute the hastconf.sh script with the values described below (an example invocation follows the parameter list):

# ./hastconf.sh <remote_host> <remote_ip> <carp_ip> <remote_poolname> <vol_path> <vol_size> <hastvol_name>

Where,

  • remote_host => hostname of the secondary node
  • remote_ip => IP address of the secondary node
  • carp_ip => shared CARP IP used for failover
  • remote_poolname => name of the ZFS pool already created on the secondary node
  • vol_path => full path of the zvol created on the primary node
  • vol_size => size of the zvol created on the primary node, such as 15G or 50G
  • hastvol_name => user-defined name for the HAST volume
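
As an illustration, an invocation using the placeholder values from the earlier hast.conf example might look like this (every argument must be adjusted to match your own setup):

# ./hastconf.sh node2 192.168.10.2 192.168.10.50 tank /dev/zvol/tank/hastdata 15G hastvol0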

 

After this, the configuration file /etc/hast.conf should be present on both nodes, and the file will be identical on both.
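
The hastconf.sh script is expected to initialize and start HAST for you; if you ever need to do it by hand, the equivalent Handbook steps are as follows (hastvol0 is the placeholder resource name used above):

On both nodes:

# hastctl create hastvol0
# sysrc hastd_enable=YES
# service hastd start

On the primary node:

# hastctl role primary hastvol0

On the secondary node:

# hastctl role secondary hastvol0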

 

Failover Configuration with CARP

 

The goal is a robust storage system that is resistant to the failure of any given node: if the primary node fails, the secondary node takes over seamlessly, checks and mounts the file system, and continues to work without missing a single bit of data. To achieve this, CARP (Common Address Redundancy Protocol) is used along with a carp switch script to automatically make the secondary node the primary during failover.
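
As a sketch of the CARP side (see the Handbook CARP chapter linked above for details), the shared IP is configured in /etc/rc.conf on each node. The interface name, vhid, password, and addresses below are placeholders; the secondary node gets a higher advskew so that it normally stays in BACKUP state:

Primary node /etc/rc.conf:

ifconfig_em0="inet 192.168.10.1/24"
ifconfig_em0_alias0="inet vhid 1 pass mypassword alias 192.168.10.50/32"

Secondary node /etc/rc.conf:

ifconfig_em0="inet 192.168.10.2/24"
ifconfig_em0_alias0="inet vhid 1 advskew 100 pass mypassword alias 192.168.10.50/32"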

 

Copy the content of https://gitlab.com/freebsd1/hast/-/blob/master/devd-conf.txt into the file /etc/devd.conf, place the script ‘carp-hast-switch.sh’ into the directory /usr/local/sbin/, and then restart the devd service.
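
The devd-conf.txt from the repository should contain entries along the lines of the Handbook's example, which react to CARP state changes and call the switch script. Roughly (the vhid/interface match pattern is generic; only the script name follows this article):

notify 30 {
        match "system" "CARP";
        match "subsystem" "[0-9]+@[0-9a-z]+";
        match "type" "MASTER";
        action "/usr/local/sbin/carp-hast-switch.sh master";
};

notify 30 {
        match "system" "CARP";
        match "subsystem" "[0-9]+@[0-9a-z]+";
        match "type" "BACKUP";
        action "/usr/local/sbin/carp-hast-switch.sh slave";
};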

 

# service devd restart

 

Then run the command “# hastctl status” on both nodes and check the output:

 

Node1: [hastctl status output screenshot]

Node2: [hastctl status output screenshot]

That’s it; the ZFS volume is now successfully replicated via HAST. Using this volume, /dev/hast/<hastvol>, one can export iSCSI storage without any issue.
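
For example, a minimal /etc/ctl.conf exporting the HAST device through FreeBSD's native ctld iSCSI target could look like the following sketch (the target IQN, portal-group name, and volume name are placeholders, and authentication is disabled only for brevity):

portal-group pg0 {
        discovery-auth-group no-authentication
        listen 0.0.0.0
}

target iqn.2021-01.org.example:hastvol0 {
        auth-group no-authentication
        portal-group pg0
        lun 0 {
                path /dev/hast/hastvol0
        }
}

Then enable and start the target daemon:

# sysrc ctld_enable=YES
# service ctld start

Note that /dev/hast/<hastvol> exists only on the node that is currently primary, so ctld should be started (or restarted by the failover script) on the primary node.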
