Hivemind dev environment setup guide (according to my findings, it may or may not be the best way)



Hello!

As some of you may know, I am working on hivemind (woohoo, communities v2 coming soon™).
Setting up a dev environment can be a bit tricky, so I figured I'd share some of the knowledge I picked up while doing it.

For the record, this is just what I found; it may not be the best way to do it. If you know ways to improve it, please comment below :)

PostgreSQL

First we need a PostgreSQL database. I think Docker is best here because it lets us run exactly the version we want without struggling to install it.

You will need to install Docker and Docker Compose.

Once you have those, create a docker-compose.yml file with this content:

version: '3'
services:
    dbpostgres:
        image: postgres:10
        container_name: hivemind-postgres-dev-container
        ports:
            - "5532:5432"
        environment:
            - POSTGRES_DB=hive
            - POSTGRES_PASSWORD=root

Then in the directory run
docker-compose up -d
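If you want to make sure the container came up properly before moving on, you can check it like this (dbpostgres is the service name from the docker-compose.yml above):

docker-compose ps
# tail the logs, look for "database system is ready to accept connections"
docker-compose logs -f dbpostgres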

Then connect to it using your favorite tool (I use DataGrip) and install the intarray extension (you could automate this with an init script in the Dockerfile, but I didn't have the motivation to do it for now; there's a sketch after the SQL below):

CREATE EXTENSION IF NOT EXISTS intarray;
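If you'd rather stay on the command line, the same thing works with psql through the mapped port:

PGPASSWORD=root psql -h localhost -p 5532 -U postgres -d hive -c "CREATE EXTENSION IF NOT EXISTS intarray;"

And if you want to automate it, the official postgres image runs any .sql file mounted into /docker-entrypoint-initdb.d when the database is first initialized (a sketch; init.sql is just a name I picked, and note it only runs on the first start with an empty data directory, not on restarts):

# add to the dbpostgres service in docker-compose.yml
        volumes:
            - ./init.sql:/docker-entrypoint-initdb.d/init.sql

# init.sql
CREATE EXTENSION IF NOT EXISTS intarray;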

Hive setup

(if hive has had another release since this was written, check https://gtg.openhive.network/get/bin/ for the latest binary)
(it's better to build hived yourself but we can assume @gtg is trustworthy)
wget https://gtg.openhive.network/get/bin/hived-v1.25.0
mv hived-v1.25.0 hived && chmod +x hived
mkdir data

Run hived for a couple of seconds to create the directory structure:

./hived -d data
Then exit (Ctrl+C) and delete the blockchain files:
rm -rf ./data/blockchain/*
Then we'll download the block_log containing the first 5 million blocks from gtg and put it in the right directory:
wget https://gtg.openhive.network/get/blockchain/block_log.5M -P data/blockchain/ && mv data/blockchain/block_log.5M data/blockchain/block_log
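You can sanity-check that the file ended up in the right place:

ls -lh data/blockchain/block_log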

Then update the hived config.ini:

nano data/config.ini

log-appender = {"appender":"stderr","stream":"std_error"}
log-logger = {"name":"default","level":"info","appender":"stderr"}
backtrace = yes
plugin = webserver p2p json_rpc
plugin = database_api
# condenser_api enabled per abw request
plugin = condenser_api
plugin = block_api
# gandalf enabled witness + rc
plugin = witness
plugin = rc

# market_history enabled per abw request
plugin = market_history
plugin = market_history_api

plugin = account_history_rocksdb
plugin = account_history_api

# gandalf enabled transaction status
plugin = transaction_status
plugin = transaction_status_api

# gandalf enabled account by key
plugin = account_by_key
plugin = account_by_key_api

# and a few APIs
plugin = block_api network_broadcast_api rc_api

history-disable-pruning = 1
account-history-rocksdb-path = "blockchain/account-history-rocksdb-storage"

# shared-file-dir = "/run/hive"
shared-file-size = 20G
shared-file-full-threshold = 9500
shared-file-scale-rate = 1000

flush-state-interval = 0

market-history-bucket-size = [15,60,300,3600,86400]
market-history-buckets-per-size = 5760

p2p-endpoint = 0.0.0.0:2001
p2p-seed-node =
# gtg.openhive.network:2001

transaction-status-block-depth = 64000
transaction-status-track-after-block = 42000000

webserver-http-endpoint = 0.0.0.0:8091
webserver-ws-endpoint = 0.0.0.0:8090

webserver-thread-pool-size = 8

Finally, replay your node, stopping at 5 million blocks:

./hived --replay-blockchain --stop-replay-at-block 5000000 -d data
This will cook for 10-20 minutes depending on your hardware.
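Once the replay stops, hived keeps running and serving its APIs, so you can check that it's at the right head block (database_api.get_dynamic_global_properties is a standard hived call):

curl -s --data '{"jsonrpc":"2.0","method":"database_api.get_dynamic_global_properties","params":{},"id":1}' http://localhost:8091
# head_block_number in the response should be 5000000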

Hivemind setup

First, clone hivemind:
git clone [email protected]:hive/hivemind.git
then move into the repo and switch to the develop branch (usually better for... developing)
cd hivemind && git checkout develop
and then install the dependencies:
python3 -m pip install --no-cache-dir --verbose --user -e .[dev] 2>&1 | tee pip_install.log
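Side note: if you'd rather not install into your user site-packages, a virtualenv works too (a small sketch, same dependencies as above):

python3 -m venv venv
source venv/bin/activate
pip install -e '.[dev]'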

Then if your hived node is done replaying, you can do your first sync:

./hive/cli.py --database-url postgresql://postgres:root@localhost:5532/hive --test-max-block=4999998 --steemd-url='{"default":"http://localhost:8091"}'

This process will take quite a while (~20 minutes or more depending on your hardware).
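If you want to peek at what it's doing, you can list the tables hivemind has created so far:

PGPASSWORD=root psql -h localhost -p 5532 -U postgres -d hive -c '\dt'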

Then what I like to do is dump the database so I can get back to this state easily without having to resync everything:

PGPASSWORD=root pg_dump -h localhost -p 5532 -U postgres -d hive -Fc -f dump.dump

and if I want to restore:

PGPASSWORD=root pg_restore -h localhost -p 5532 -U postgres -d hive dump.dump -j12 --clean
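Since I do this constantly, a tiny wrapper script saves some typing (a hypothetical helper, it just wraps the two commands above):

#!/bin/bash
# db.sh: dump or restore the hivemind dev database
set -e
case "$1" in
    dump)    PGPASSWORD=root pg_dump -h localhost -p 5532 -U postgres -d hive -Fc -f dump.dump ;;
    restore) PGPASSWORD=root pg_restore -h localhost -p 5532 -U postgres -d hive dump.dump -j12 --clean ;;
    *)       echo "usage: $0 dump|restore"; exit 1 ;;
esac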

Finally, if you want to test some specific applications, mocks are your friends! Look into the mock_data folder for examples.

To do a sync that includes the mock data, run this:

./hive/cli.py --database-url postgresql://postgres:root@localhost:5532/hive --test-max-block=4999998 --steemd-url='{"default":"http://localhost:8091"}' --mock-block-data-path /home/howo/projects/hivemind/mock_data/block_data/community_op/mock_block_data_community_test.json
(replace the --mock-block-data-path value with whatever mock file you have, obviously)

A slight note on --test-max-block: it needs to be the height of the highest block in your mocks + 2, because hivemind trails the live chain by two blocks. So if your mocks end at block 2 and you set --test-max-block to 2, they won't be picked up; you'd need --test-max-block=4.

Comments

it's better to build hived yourself but we can assume @gtg is trustworthy

I trust him for life ;-)

Why Postgres 11? Be careful with this creature. ;-)
Ubuntu 18.04 LTS has 10 by default, Ubuntu 20.04 LTS has 12 by default.
I already used Hivemind on both and stumbled upon annoying differences (hopefully most of them are handled in current develop already, and 11 won't be more quirky)

I think I remember that 11 was what @blocktrades (could be wrong though) was running on the nodes, and support for 12 wasn't fully finalized yet, so I went with what is running on "prod" because I expect to have the most "stable" experience with it :)

We're running 10 on production and we've experimented with 12 and 13 (we've encountered issues with both, but we've been developing workarounds as we go).

kudos on the power up!

Thanks :-)

@blocktrades.com thanks for the contribution.

Is there any minimum system configuration for this setup?

Hmm, not really. I haven't tested it on low-spec machines, but since we are only syncing 5M blocks, unless you really have a toaster for a PC you should be able to work with it.

It's just that some operations (namely syncing, dumping and restoring) will suck.

That PURPLE photo looks like a hint for Telos.

Maybe you'll be running dapps inside Telos DSTOR.

Nope, Hive only.

You are doing great!! I really wish I had your level of understanding, but take my upvotes.

I didn't know you are already working on the next feature for communities, awesome! One thing: I don't know if you saw the discussion about making configurable options for communities instead of having different types of communities. Making it configurable will allow communities to switch as they wish between being closed to new members and being open. So you don't have to start a new community if you want to make such a change, it's just a settings change. Many communities might start out very open, but over time as they get more members they will likely experience more spam or irrelevant posts so they may decide to make posting more restricted (instead of starting a new community). Would it be difficult to implement this kind of community configuration?

One thing: I don't know if you saw the discussion about making configurable options for communities instead of having different types of communities. Making it configurable will allow communities to switch as they wish between being closed to new members and being open.

That's a good idea, could you link me to those discussions?

Would it be difficult to implement this kind of community configuration?

It's mostly complicated because it's difficult to determine whether a post is muted or not when creating it. So effectively, if we were to change the mode to "only members can post", anyone who posted before and isn't a member would see their posts disappear.

That's a good idea, could you link me to those discussions?

Here is a link to one discussion on it: https://peakd.com/hive-102930/@borislavzlatanov/re-jarvie-qozrtj (and it's a part of larger post on the future of communities)

if we were to change the mode to "only members can post", anyone who posted before and isn't a member would see their posts disappear

Well, that may be desired or undesired behavior depending on who you ask. It may be that we start with the simpler implementation and then add complexity and flexibility over time. Since we are creating completely new things (decentralized communities), it may make sense to develop in small steps anyway, so we can see how it goes and adjust.

I trust him for life 👍

Wow it is quite interesting. Keep the good work moving

I don't know what I'm looking at, but I definitely wanna jump on this. Probably when I get a new PC. If anyone could take some time out of their day to explain what this is, that would be so great (:

Congratulations @howo! Your post has been a top performer on the Hive blockchain and you have been rewarded with the following badge:

Post with the highest payout of the day.


Forgive my lack of knowledge, but I need an explanation, as I didn't understand what I read.

Interesting information, thank you