How to build and run a Hive node - witness/seed/consensus and account history nodes

Posted in HiveDevs (edited)


hive-node-build.jpg

This guide covers building on Ubuntu and should work on Ubuntu 20. Older versions are not recommended, and it is best to start with a fresh install.

System requirements

Consensus/witness/seed node:

  • 1TB storage**
  • 32GB RAM*

Account history node:
Note: This type of node will be deprecated and replaced by HAFAH after hard fork 26, but it will still work for now.

  • 2TB storage**
  • 32GB RAM*

*: The required RAM goes up over time
**: The required storage goes up over time
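Before proceeding, it can help to confirm the machine meets these requirements. A minimal sketch (the `has_enough` helper and the 32 GB / 1000 GB thresholds just mirror the table above; adjust as the requirements grow):

```shell
# has_enough AVAILABLE REQUIRED -> exit 0 if AVAILABLE >= REQUIRED (in GB)
has_enough() {
  [ "$1" -ge "$2" ]
}

# Total RAM (GB) from /proc/meminfo and free space (GB) on /:
ram_gb=$(awk '/MemTotal/ { printf "%d", $2 / 1024 / 1024 }' /proc/meminfo)
disk_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')

has_enough "$ram_gb" 32    && echo "RAM OK ($ram_gb GB)"   || echo "RAM low ($ram_gb GB)"
has_enough "$disk_gb" 1000 && echo "disk OK ($disk_gb GB)" || echo "disk low ($disk_gb GB)"
```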

Building

Install dependencies:

sudo apt-get update

sudo apt-get install git wget

sudo apt-get install \
    autoconf \
    automake \
    cmake \
    g++ \
    git \
    zlib1g-dev \
    libbz2-dev \
    libsnappy-dev \
    libssl-dev \
    libtool \
    make \
    pkg-config \
    doxygen \
    libncurses5-dev \
    libreadline-dev \
    perl \
    python3 \
    python3-jinja2

sudo apt-get install \
    libboost-chrono-dev \
    libboost-context-dev \
    libboost-coroutine-dev \
    libboost-date-time-dev \
    libboost-filesystem-dev \
    libboost-iostreams-dev \
    libboost-locale-dev \
    libboost-program-options-dev \
    libboost-serialization-dev \
    libboost-system-dev \
    libboost-test-dev \
    libboost-thread-dev

Clone the repository and select the version:

git clone https://gitlab.syncad.com/hive/hive
cd hive
git checkout v1.25.0
git submodule update --init --recursive

We selected the tag v1.25.0. You can check out master for the latest version, but that is not recommended for witness nodes, because witnesses vote for a hard fork by running that hard fork's version.
Note that v1.25.0 will not work after hard fork 26.

mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j$(nproc) hived
make -j$(nproc) cli_wallet
make install

In case the original repository is down, you can use the mirror at https://gitlab.com/hiveblocks/hive.
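Before moving on, you can sanity-check that the binaries were installed and are on your PATH (a small sketch; `check_bin` is just an illustrative helper name):

```shell
# Report whether a binary is available on PATH.
check_bin() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: OK"
  else
    echo "$1: missing"
  fi
}

check_bin hived
check_bin cli_wallet
```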

Data directory

Now you can make another folder as a data directory to hold block_log and the config file. I will use the /root/hive-data directory.

cd /root
mkdir hive-data
hived -d /root/hive-data --dump-config

The above command will create the config file. We will edit that later depending on the type of node that we need.

It is highly recommended to download block_log to speed up the sync/replay time. The one linked below is hosted by the great wizard @gtg (make sure to vote for him as a witness).

mkdir -p /root/hive-data/blockchain
cd /root/hive-data/blockchain
wget https://gtg.openhive.network/get/blockchain/block_log

After downloading the block_log, choose one of the configs from below and move on to the replay section.


Seed node

The config file is located at hive-data/config.ini.
Edit the following parameters in the config file. Add them if they don't exist.

# Plugin(s) to enable, may be specified multiple times
plugin = witness condenser_api network_broadcast_api database_api block_api
# account_by_key is enabled by default - required to use 'get_witness' and 'get_account' in cli_wallet
plugin = account_by_key account_by_key_api
# required for creating and importing Hive 1.24+ State Snapshots
plugin = state_snapshot

shared-file-size = 30G
shared-file-dir = /dev/shm/
p2p-endpoint = 0.0.0.0:2001

The seed node will be open on port 2001.
Ready to replay the node.
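Once the node is running (after the replay below), you can confirm the p2p port is open. This is a sketch assuming the p2p-endpoint above and that `ss` from iproute2 is installed:

```shell
# port_listening PORT -> succeeds if a local TCP socket listens on PORT.
port_listening() {
  ss -tln 2>/dev/null | grep -q ":$1 "
}

port_listening 2001 && echo "p2p port 2001 is open" \
                    || echo "port 2001 not listening yet"
```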


Account history node

Use the same config as the seed node, with these additions:

# open the port for RPC connections
webserver-http-endpoint = 127.0.0.1:8091

# edit depending on the load on the RPC node - 64-256 for high traffic
webserver-thread-pool-size = 2

# additional alongside the other plugins
plugin = account_history_rocksdb account_history_api

# The location of the rocksdb database for account history. By default it is $DATA_DIR/blockchain/account-history-rocksdb-storage
account-history-rocksdb-path = "blockchain/account-history-rocksdb-storage"

# optionally, you can comment this out
#p2p-endpoint = 0.0.0.0:2001

Ready to replay the node.


Witness node

To run cli_wallet in offline mode, you will need to build it from the develop branch instead of v1.25.0 or master. Alternatively, replay the node first, then come back here and run cli_wallet without -o.

Run cli_wallet in offline mode and generate a pair of keys.

cli_wallet -o # important notes are above

Then

suggest_brain_key

You should get a pair of keys like this.

{
  "brain_priv_key": "LOGIUM SPANNEL QUETCH LOOPIST NUTGALL LAMINAR PASMO SPRUE TEINDER ECHO WIVE AGREER LOON DIELIKE HIVE MERROW",
  "wif_priv_key": "5HqrGLNrHVuKUHCW5VopRykStsek9WA4tWWhtcx9zjnSUHg38kX",
  "pub_key": "STM8SqT7hHVyzuxkt6yVdUH6rzvCUDqAmaPLZx3UtSDTxPHRMNLFa"
}

Copy both the private and public keys. We will add the public key to our witness account's signing key. And the private key will be added to the config file only to sign the blocks that we produce.

The config file is located at hive-data/config.ini.
Edit the following parameters in the config file. Add them if they don't exist.

# witness account - with double quotes
witness= "username"
# witness account's signing private key - NO double quotes
private-key= 5HqrGLNrHVuKUHCW5VopRykStsek9WA4tWWhtcx9zjnSUHg38kX

# Plugin(s) to enable, may be specified multiple times
plugin = witness condenser_api network_broadcast_api database_api block_api
# account_by_key is enabled by default - required to use 'get_witness' and 'get_account' in cli_wallet
plugin = account_by_key account_by_key_api
# required for creating and importing Hive 1.24+ State Snapshots
plugin = state_snapshot

shared-file-size = 30G
shared-file-dir = /dev/shm/
# open the ports for cli_wallet connections
webserver-http-endpoint = 127.0.0.1:8091
webserver-ws-endpoint = 127.0.0.1:8090

# Remove the following if exists
#p2p-endpoint = 0.0.0.0:2001

Replace the witness and private-key values with your username and the private key generated by cli_wallet.
Ready to replay the node.

*** Come back here after replaying. The replay instructions are in the Replay section below.

After finishing the replay, you need to broadcast an operation on your witness account to add/edit the signing key on your account and enable production.
We can use cli_wallet to do so, though there are open source witness tools that can do the same thing. cli_wallet will connect to the local node that we are running.

cli_wallet
# OR
cli_wallet -s ws://127.0.0.1:8090

Add a password, unlock, and import your active private key.

set_password "mypassword"
unlock "mypassword"
import_key 5jxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Edit the following operation before broadcasting:

update_witness "username" "https://hive.blog/witness-category/@username/my-witness-thread" "STMxxxxxxx" {"account_creation_fee":"3.000 HIVE","maximum_block_size":65536,"hbd_interest_rate":0} true

What you need to edit should be clear: your username, a link to a post (or something similar), and the signing public key (generated by cli_wallet).
Note: If you take too long inside cli_wallet, it might time out and not broadcast the operation.

After broadcasting, you can see the signing key at https://hiveblocks.com/@username under the "Authorities" tab on the left side.
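You can also query your own node over its HTTP endpoint (this assumes the webserver-http-endpoint from the config above; replace "username" with your account). condenser_api.get_witness_by_account returns the witness object, including the current signing key:

```shell
# JSON-RPC request to the local node (edit "username"):
REQ='{"jsonrpc":"2.0","method":"condenser_api.get_witness_by_account","params":["username"],"id":1}'
curl -s --data "$REQ" http://127.0.0.1:8091 || echo "node not reachable"
```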

Your witness information will become visible on the witness lists only after you produce a block.


tmux/screen

You will need screen or tmux to keep hived running. I prefer tmux.

sudo apt-get install tmux

A simple cheatsheet for tmux:

tmux # start a new session
tmux attach # open the recently created session
tmux ls # list all the active sessions
tmux attach -t <target> # attach to a certain session

# Commands while attached to a tmux session:
ctrl+b, d # detach from session
ctrl+d # end the session

Replay

First, adjust the size of /dev/shm on your machine.
Note: This is where we keep the shared_memory.bin file. It grows over time, so keep an eye on it. /dev/shm is backed by RAM. You can edit the config file and choose a location on storage for shared_memory instead, but that will greatly increase replay time.

sudo mount -o "remount,size=30G" /dev/shm
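To verify the new size, check the mount. Keep in mind the remount does not persist across reboots; a tmpfs entry in /etc/fstab is one option to make it stick (a sketch only, check your distro's documentation before editing fstab):

```shell
# Verify the current size of /dev/shm:
df -h /dev/shm

# The remount above is lost on reboot. A tmpfs line in /etc/fstab
# like the following would make the size persistent (sketch only):
# tmpfs  /dev/shm  tmpfs  defaults,size=30G  0  0
```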

Depending on your CPU/RAM/storage speed, this will take a few hours. Open a tmux session by typing tmux, then run the following command.

hived -d /root/hive-data --replay-blockchain

You can detach with ctrl+b then d and let it run in the background.
Run tmux attach to reattach and see the progress. Remember not to press ctrl+c inside the session, as it will terminate hived.

When the replay is done, you should see the newly produced blocks in your logs every 3 seconds.

2022-06-14T08:46:38.741642 p2p_plugin.cpp:187            handle_block         ] Got 47 transactions on block 65253208 by deathwing -- Block Time Offset: -258 ms
2022-06-14T08:46:41.789187 p2p_plugin.cpp:187            handle_block         ] Got 64 transactions on block 65253209 by pharesim -- Block Time Offset: -210 ms
2022-06-14T08:46:44.790889 p2p_plugin.cpp:187            handle_block         ] Got 56 transactions on block 65253210 by therealwolf -- Block Time Offset: -209 ms
2022-06-14T08:46:47.749655 p2p_plugin.cpp:187            handle_block         ] Got 58 transactions on block 65253211 by blocktrades -- Block Time Offset: -250 ms
2022-06-14T08:46:50.704510 p2p_plugin.cpp:187            handle_block         ] Got 70 transactions on block 65253212 by stoodkev -- Block Time Offset: -295 ms
2022-06-14T08:46:54.577980 p2p_plugin.cpp:187            handle_block         ] Got 49 transactions on block 65253213 by jesta -- Block Time Offset: 577 ms

Start/Stop

After finishing the replay, you can simply stop and start the node again:

  • ctrl+c once to gracefully stop the node
  • hived -d /root/hive-data to start the node

Notes

Note #1: If you reboot the system, you will have to replay the node again, because the shared_memory.bin file is located in RAM at /dev/shm and is removed on reboot. You can back up that file after gracefully stopping the node and restore it before starting again.
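A minimal sketch of such a backup (the paths are assumptions based on the shared-file-dir and data directory used in this guide; run shm_backup only after gracefully stopping hived, and shm_restore before starting it again):

```shell
# Paths are assumptions; override via environment if yours differ.
SHM="${SHM:-/dev/shm/shared_memory.bin}"
BACKUP="${BACKUP:-/root/hive-data/shared_memory.bin.bak}"

# Copy the state file out of RAM before a reboot...
shm_backup()  { cp "$SHM" "$BACKUP"; }
# ...and back into /dev/shm afterwards.
shm_restore() { cp "$BACKUP" "$SHM"; }
```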

Note #2: If you close the hived node forcefully, most likely you will have to replay the node again. You might need to redownload the block_log file too.

Note #3: If your block_log file gets corrupted by forcefully closing the node or by a crash, instead of redownloading you can trim a few MB or GB off the end of the file and repair it with wget -c https://gtg.openhive.network/get/blockchain/block_log. This will download only the missing end of the file.

Note #4: You need to stop/start the node after editing the config.ini file for changes to take effect.

Note #5: Make sure the time on your machine is in sync with an online source.

Note #6: It is recommended to not share the IP of your witness node.


And 3 cute kittens as bonus content.

cute-kittens.jpg

I read all the comments. Ask your questions or add anything that I missed in the post.

Comments

I'd avoid using root user (and /root dir), one can use a dedicated account for that.
Also, for a witness node, avoid adding plugins other than the essential ones (i.e. no account_by_name, etc.).
It's worth noting, however, that a seed node can play a simple API role (broadcast transactions, handle cli_wallet, provide get_block to simple dApps, etc.).

I’d recommend skipping ram and using disk for shared mem these days. Performance isn’t a big difference anymore and you can freely reboot.

With recent optimizations using ram is less critical.

It's a lot more wear on the SSD and I think replay still takes somewhat longer, but I admittedly haven't timed that recently. Still might be worth it for convenience.

It's a lot more wear on the SSD

Nothing significant with modern ssd/nvme and you only replay what 2-3 times a year.

If you reboot the system, you have to replay the node again. Because the shared_memory.bin file is located on the ram at /dev/shm and it will be removed on reboot

It's possible to make a script that moves the file between shm and regular storage when you start and stop the node.

Thanks for this... bookmarked for future endeavours.

Very useful guide. Thanks for sharing.

Thanks for sharing.

What if I install at DigitalOcean? Or is there any recommended cheap VPS?

You will not be able to get enough disk space with a VPS to handle a Hive node.

they are literally reselling Contabo VPS, same specs but marginally higher prices, they definitely just resell

I have been trying out this provider

https://billing.dacentec.com/hostbill/index.php?/cart/dedicated-servers/

They offer a lot of disk space with lower end CPUs. I got an 8GB server, though it seems to hang on the replays using HIAB, so it may not be enough.

spinning disks will be slow, especially when using disk for shared memory.

It is quite slow, and in my experience it hangs at HF 19 and then halts. I watched htop and it seems not to run out of memory. It may be the older version of Ubuntu 18 it's running on.

Need to find out if dacentec has servers that can run Ubuntu 20 or 22 on them.

Ideally you want NVME, but SSD will work. Spinning disks will be brutal.

I'm not sure it's even possible to keep up. I had that fail at one point. And I'm pretty sure it won't with only 8GB of RAM for caching.

If you're using shm/ram for the file, spinning disk is fine for storage although some APIs (if accessing block log) will be slower.

In my experience 8GB is not enough, it replays up to HF19 and then the app crashes.

1TB storage**

Wow, that's a big disk.
I just have $10 at DigitalOcean.
😊😀

Contabo has an under $40 VPS with 60GB of ram and 1.6TB of SSD
It is shared, but it is cheap enough to empower small witnesses

fantastic guide!


can you self hide comments? sorta like visual version of no payout?

Very clear and complete with troubleshoots, but definitely requires a very large resource, is this installed on an accessible server or in the cloud? Bookmarked.

Awesome, that is gonna help a lot of people.

The replay may take a few days; the first time I did it, it took around one week. Keep that in mind if you are doing this for the first time: syncing is slow.

Max 10-12 hours on a typically fast SSD/nvme.

With a pre-downloaded snapshot, I assume?

"snapshot" would take way less time. I'm talking about replay.

Looks like a really detailed guide, book marked for future!

Ubuntu is a weird recommendation. 😁

imagine a hive witness node coming pre installed to a version of ubuntu

hive linux, many have talked about it ... but imagine ...

a cd rom to mine crypto

Does this count as well? 😁

hive-node-in-a-box.png


I would prefer a pure "from boot to bash prompt" Arch Linux installation. Just my 2c.

It's the most common distro out there

Sure?

2022.06.16.distrowatch.top20.distros.last.6.months.png

(Source: distrowatch.com)


Someone had to take care of the former disappointed Windows users back then. Since its introduction, Ubuntu has been the Windows of the Linux world: centralized, poorly maintained, and marred with countless traps.

I don't consider it recommendable and I refuse at all costs to touch Ubuntu systems.

But no matter. As we all know, tastes are different. 😁

Thank you for this.

Very well explained. Thank you!

if you could go back in time to steem knowing what you know now, what would you have done with a bunch of steem witnesses? How would you have integrated them into various businesses?

and will we ever get bidbots on twitter if elon and cz buy steemit from justin sun lol ?

I am in the dark talking about Ubuntu, this has opened my eyes to making further research. Thanks.

As usual this post is very informative and detailed, I added it to my bookmarks

Thank you for sharing!


Is there a way to run this in a low memory mode? Like using the swap on disk for memory instead of needing 32GB of RAM.

AFAIK you need 8GB+ of RAM (including swap) for building hived. Other than that, you should be able to put shared_memory on disk.
Expect a very long replay time.

@mahdiyari I am trying out your guide, though I seem to be hung up at the "Run cli_wallet in offline mode and generate a pair of keys" part.

When I run cli_wallet -o it comes back with unrecognised option.

Below is the full error:

cli_wallet -o
terminate called after throwing an instance of 'boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::program_options::unknown_option> >'
  what(): unrecognised option '-o'

Can you run cli_wallet at all?

when I run it without the -o switch I get this:

Logging RPC to file: logs/rpc/rpc.log
Starting a new wallet
2849297ms main.cpp:170 main ] wdata.ws_server: ws://localhost:8090
0 exception: unspecified
Underlying Transport Error
{"message":"Underlying Transport Error"}
websocket.cpp:444 operator()

{"uri":"ws://localhost:8090"}
websocket.cpp:673 connect

I think offline mode was added only in the master branch. You could build from master or just use a web wallet like https://hive.ausbit.dev/witness to set up the witness params. Also, https://hivetasks.com/key-generator can help generate a pair of keys.

I can rebuild from master. Do I just follow the guide again, but this time run git checkout master instead of git checkout v1.25.0?

Or do I need to delete everything first, all the folders and files (except the blockchain).

I would say clone in another folder and build only cli_wallet. Yes, git checkout master should do it.

Thanks.
If I am trying to run a witness box - can I just use my own wallet (i.e., my account)?

Sure. Any account.

Thanks.

So how do I make the tokens my node mines go to my account? And how do I monitor the whole mining process?

