Setting Up Your Own HAF Testnet

Posted in HiveDevs · 2 years ago

(cover image: haf-testnet.jpg)


I recently set up a HAF-based testnet to use when developing new features for Hive Plug & Play and the other apps I’m working on that leverage HAF. In this post, I share how you can set one up yourself.


Server requirements

I am using a Privex VPS-HIVE-SE server with the following specs:

16 GB RAM
500 GB SSD
4 cores
100 Mbps

These specs allow you to build hived and run your testnet with ample room for storing blocks.


Server setup

I recommend using the root user during the setup. First, update the system packages by running apt update and apt upgrade. Once done, you can go ahead and install the necessary packages.

Install dependencies

sudo apt-get install \
    python3 \
    python3-dev \
    python3-pip \
    postgresql \
    libpq-dev \
    libssl-dev \
    libreadline-dev \
    libpqxx-dev \
    postgresql-server-dev-12 \
    postgresql-server-dev-all

# hived packages

sudo apt-get install \
    autoconf \
    automake \
    cmake \
    g++ \
    git \
    zlib1g-dev \
    libbz2-dev \
    libsnappy-dev \
    libssl-dev \
    libtool \
    make \
    pkg-config \
    doxygen \
    libncurses5-dev \
    libreadline-dev \
    perl \
    python3 \
    python3-jinja2

# Boost packages (hived)

sudo apt-get install \
    libboost-chrono-dev \
    libboost-context-dev \
    libboost-coroutine-dev \
    libboost-date-time-dev \
    libboost-filesystem-dev \
    libboost-iostreams-dev \
    libboost-locale-dev \
    libboost-program-options-dev \
    libboost-serialization-dev \
    libboost-system-dev \
    libboost-test-dev \
    libboost-thread-dev

Install and set up PostgreSQL

PostgreSQL was already installed as part of the dependencies above; if you skipped that step, install it now with apt install postgresql.

Setup custom DB directory

I had to set up a custom data directory for PostgreSQL. If you need to do this, follow these steps:

  • Create the directory, for example mkdir new_dir

  • Change the directory permissions using chmod 0700 new_dir and chown -R postgres:postgres new_dir

  • Edit the file /etc/postgresql/12/main/postgresql.conf and change the following parameters to use your newly created directory:

data_directory = '/path/to/new_dir'
hba_file = '/path/to/new_dir/pg_hba.conf'
ident_file = '/path/to/new_dir/pg_ident.conf'
  • Initialize the new directory for PostgreSQL (as the postgres user; note the uppercase -D flag): sudo -u postgres /usr/lib/postgresql/12/bin/initdb -D /path/to/new_dir -U postgres
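Assuming PostgreSQL 12 on a systemd-based distro, the steps above can be consolidated roughly as follows (stopping and restarting the service around the switch is my addition; adjust /path/to/new_dir to your setup):

```shell
# Stop PostgreSQL before switching data directories
systemctl stop postgresql

# Create the new data directory and hand it to the postgres user
mkdir -p /path/to/new_dir
chown -R postgres:postgres /path/to/new_dir
chmod 0700 /path/to/new_dir

# Initialize the directory as the postgres user, then restart the service
sudo -u postgres /usr/lib/postgresql/12/bin/initdb -D /path/to/new_dir -U postgres
systemctl start postgresql
```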

Setup access rights

For simplicity, we are going to use the trust authentication method for our PostgreSQL database, which requires no password.

Edit the file /path/to/new_dir/pg_hba.conf with these parameters:

# Database administrative login by Unix domain socket
local   all   postgres                trust

# IPv4 local connections
host    all   all       127.0.0.1/32  trust
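With trust authentication in place (and the service restarted after the edit), you can sanity-check that local connections work without a password:

```shell
# Both of these should connect without prompting for a password
sudo -u postgres psql -c 'SELECT version();'
psql -h 127.0.0.1 -U postgres -c 'SELECT 1;'
```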

Install HAF and hived

Clone the repository and prepare for build:

git clone https://gitlab.syncad.com/hive/haf
cd haf
git submodule update --init --recursive
mkdir build
cd build

Build HAF and hived

cmake \
  -DCMAKE_BUILD_TYPE=Release \
  -DBUILD_HIVE_TESTNET=ON \
  -DCLEAR_VOTES=ON \
  -DSKIP_BY_TX_ID=ON \
  -DHIVE_LINT_LEVEL=OFF \
  ..
make
make install

Create a HAF database

  • Create the database:

su postgres
psql
CREATE DATABASE haf;

  • Install the hive_fork_manager extension:

\c haf
CREATE EXTENSION hive_fork_manager CASCADE;

  • Setup PostgreSQL permissions:

CREATE ROLE hived LOGIN PASSWORD 'hivedpass' INHERIT IN ROLE hived_group;
CREATE ROLE application LOGIN PASSWORD 'applicationpass' INHERIT IN ROLE hive_applications_group;
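To confirm the extension and roles were created, a quick check from the shell (database and role names taken from the steps above):

```shell
# The extension should show up with a version number
sudo -u postgres psql -d haf -c \
  "SELECT extname, extversion FROM pg_extension WHERE extname = 'hive_fork_manager';"

# The hived role should be listed with LOGIN
sudo -u postgres psql -c '\du hived'
```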

Setup hived

Edit the config.ini file for your hived installation as follows.

p2p-endpoint = 0.0.0.0:2541
webserver-http-endpoint = 127.0.0.2:8751
webserver-ws-endpoint = 127.0.0.1:8090
rc-account-whitelist = initminer,my_account,my_account_two

plugin = sql_serializer webserver p2p json_rpc witness account_by_key reputation market_history
plugin = database_api account_by_key_api network_broadcast_api reputation_api market_history_api condenser_api block_api rc_api wallet_bridge_api
psql-url = dbname=haf user=postgres password=your_password hostaddr=127.0.0.1 port=5432

# name of witness controlled by this node (e.g. initwitness )
witness = "initminer"

# WIF PRIVATE KEY to be used by one or more witnesses or miners
private-key = 

# witnesses
witness = "init-0"
witness = "init-1"
witness = "init-2"
witness = "init-3"
witness = "init-4"
witness = "init-5"
witness = "init-6"
witness = "init-7"
witness = "init-8"
witness = "init-9"
witness = "init-10"
witness = "init-11"
witness = "init-12"
witness = "init-13"
witness = "init-14"
witness = "init-15"
witness = "init-16"
witness = "init-17"
witness = "init-18"
witness = "init-19"
witness = "init-20"
witness = "init-21"

private-key = 
private-key = 
private-key = 
private-key = 
private-key = 
private-key = 
private-key = 
private-key = 
private-key = 
private-key = 
private-key = 
private-key = 
private-key = 
private-key = 
private-key = 
private-key = 
private-key = 
private-key = 
private-key = 
private-key = 
private-key = 

rc-account-whitelist is where you set the accounts you want RC-whitelisted. I added initminer to allow for the creation of new accounts (more on that later in this tutorial).

witness = "initminer" sets the witness, and private-key = <private-key> sets the witness’s private key, which you get from the log output when you run hived.

In the # witnesses section, generate new keys and set a private key for each of the testnet witnesses. You can use beempy’s keygen for this.
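For example, with beem installed (an assumption on my part; the exact output format may vary between beem versions):

```shell
# Install beem if you don't have it yet
pip3 install -U beem

# Generate a fresh random key set; repeat once per witness
# and save each WIF into a private-key = line in config.ini
beempy keygen
```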

Running hived and creating accounts

Run hived after completing the steps above, and it should reach the point where it starts producing blocks. To create accounts, you need to use the cli_wallet.

  • Run the cli_wallet:
    • cd /hive/haf/build/hive/programs/cli_wallet
    • ./cli_wallet
  • Create a password using set_password
  • Unlock your wallet using the password set above: unlock my_password
  • Import your initminer account's private key: import_key <private_key>
  • Create a new account: create_account "initminer" "new_account" "{}" true
  • Take note of the public keys shown, as they are needed in the next step
  • Get the private keys associated with the new account by requesting them one by one from the public keys generated above: get_private_key <public_key>. Save the private keys and their roles to use them when signing transactions on the testnet.
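Once the wallet steps are done, you can verify the node and the new account over JSON-RPC (the endpoint is the webserver-http-endpoint from config.ini, and condenser_api is in the plugin list above):

```shell
# Query the new account; a successful response returns a JSON
# result array containing the account object
curl -s http://127.0.0.2:8751 \
  --data '{"jsonrpc":"2.0","method":"condenser_api.get_accounts","params":[["new_account"]],"id":1}'
```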

Nginx setup

You can set up your testnet’s node behind an nginx reverse proxy so it receives JSON-RPC requests via an HTTPS domain. Below is an example config file. Just make sure the proxy_pass parameter points to the webserver-http-endpoint value in hived’s config.ini.

server {
    listen 80;
    listen [::]:80;

    access_log /var/log/nginx/reverse-access.log;
    error_log /var/log/nginx/reverse-error.log;

    location / {
        proxy_pass http://127.0.0.2:8751;
    }
}

server {
    listen 443 ssl;

    server_name testnet.your_domain.com;

    ssl_certificate /etc/letsencrypt/live/testnet.your_domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/testnet.your_domain.com/privkey.pem;

    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass http://127.0.0.2:8751;
    }
}

That's it! Follow the steps above to start your own local HAF-based testnet for testing and developing HAF apps.

Thoughts on a publicly available HAF testnet

I’ve been looking into setting up a HAF testnet that’s available for public use. There are some PostgreSQL access considerations to work on, but once I’ve come up with a feasible setup, I can put up a public server for the community.

Let me know what you think about me setting up a public HAF testnet.


I think setting up a public HAF server is problematic, because of the issues around providing write access to schemas in your HAF database (at the very least, you'd need to figure out how to set disk storage quotas for users of the database).

I think the simplest thing to do is set up a regular public testnet, then have each dev group create their own HAF server that gets data from the public testnet. This would eliminate all the resource allocation issues. Storage requirements for these HAF servers should be small, because the testnet won't have a lot of blocks (relatively speaking).

It might be interesting to provide a docker image for such a HAF server that is already configured to connect up to a public testnet. But probably all that's really needed is a proper config.ini file, then devs could use the config.ini file + the install_haf_server.sh script to setup their server.

Just to clarify the above: each dev group would run a hived server connected to the testnet that would then feed their local HAF database.

Good points. I think that's a good way forward. I'll write a script to automate the steps and a new config.ini to start off.
