Below is a list of Hive-related programming issues worked on by the BlockTrades team during the last week or so:
Hived work (blockchain node software)
We’re continuing to test and make fixes as a precursor to tagging a second release candidate for hived.
We’ve created a new Python-based library, currently called “testtools”, for creating test scenarios for hived and hived’s CLI wallet. It replaces the beempy library that was previously used for this purpose, in order to speed up test execution.
For now, the primary purpose of this library is testing hived, but it may have more general applicability as a library for communicating with hived, in which case we will rename it to something more appropriate later. A rough sketch of the kind of JSON-RPC plumbing such a library wraps follows the link below:
https://gitlab.syncad.com/hive/hive/-/merge_requests/242
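To give a rough idea of that plumbing, here is a minimal, self-contained sketch of driving a locally running hived over JSON-RPC from Python. This is illustration only, not the actual testtools API; the helper name and endpoint URL are assumptions:

# Illustration only: a bare-bones JSON-RPC helper of the kind a test library
# wraps internally. The helper name and endpoint URL are assumptions.
import json
import urllib.request

def hived_call(method, params, url="http://127.0.0.1:8090"):
    """Send a single JSON-RPC 2.0 request to a locally running hived node."""
    payload = json.dumps({"jsonrpc": "2.0", "id": 1,
                          "method": method, "params": params}).encode()
    request = urllib.request.Request(url, data=payload,
                                     headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["result"]

# A test scenario can then assert on chain state, for example:
# props = hived_call("database_api.get_dynamic_global_properties", {})
# assert props["head_block_number"] > 0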
We created some unit-test based stress tests for the new recurrent transfers functionality. Initially we found some surprising results in terms of memory usage, but this was ultimately traced to a misconfiguration of the hived instance (it was configured with the deprecated chainbase-based account history plugin, which is known to consume too much memory). With that plugin replaced by the rocksdb account history plugin, memory consumption and general performance were fine.
We also fixed some minor issues with the recurrent transfer operation: https://gitlab.syncad.com/hive/hive/-/merge_requests/246
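For context on what those stress tests exercise, a recurrent transfer operation carries roughly the following fields. The field names below reflect our reading of the HF25 design and are meant only as an indication of the operation's shape, not an authoritative reference:

# Indicative shape of a recurrent_transfer operation as used in stress-test
# scenarios; treat the field names as assumptions, not a spec.
recurrent_transfer = [
    "recurrent_transfer",
    {
        "from": "alice",
        "to": "bob",
        "amount": "1.000 TESTS",   # testnet symbol; "HIVE" on mainnet
        "memo": "weekly payment",
        "recurrence": 168,         # hours between executions (one week)
        "executions": 12,          # how many times to repeat the transfer
        "extensions": [],
    },
]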
We’ve added a few new network API calls to hived for getting peer count, getting connected peers, adding peers, and setting allowed peers. These functions were primarily added to facilitate testing scenarios (e.g. testing forking logic), but they can be useful to node operators as well: https://gitlab.syncad.com/hive/hive/-/merge_requests/244
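As an example of how a node operator might exercise these calls, assuming they are exposed under hived's network_node_api namespace (the method names, parameter shapes, and result fields below are assumptions and should be checked against the merge request):

# Assumed method names and shapes under network_node_api; verify against
# MR 244 before relying on them. Reuses the hived_call() helper sketched above.
peers = hived_call("network_node_api.get_connected_peers", {})
print("connected peers:", len(peers.get("connected_peers", [])))

# Adding a known peer (hypothetical parameter shape):
hived_call("network_node_api.add_node", {"endpoint": "192.0.2.1:2001"})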
We’ve added support for building with boost 1.70 (tested on Ubuntu 18 and 20).
We also modified the fc library to enable a simplified logging syntax. For example, instead of:
ilog("my variable=${my_variable}",("my_variable",my_variable));
you can simply use:
ilog("my variable=${my_variable}",(my_variable));
Note that the older syntax is still required when you need to call a function on the variable to get the value to log. The two syntaxes can be mixed and matched in a single log statement.
During our testing of the fix for the longstanding “duplicate operations in account history” bug, we found that this problem could also arise when the value of the last irreversible block was “undone” as part of the shutdown of hived (i.e. when a node operator presses Ctrl-C to shut down the node). On a subsequent start, with the last irreversible block set to an earlier block, the code would re-add the operations from the already-processed blocks. To fix this, we’re making sure the irreversible block number no longer gets reverted by the database state undo operation.
Once the above issue is fixed and tested in replay mode in conjunction with a full sync of hivemind, we’ll be tagging a second release candidate for the testnet (probably Thursday or Friday). Barring any unexpected issues during testnet testing, I expect that this will be our last release candidate before the official release, based on testing results so far.
Hivemind (2nd layer applications + social media middleware)
Over the past week we’ve been making final fixes and running performance tests in preparation for a new release of hivemind for API node operators later this week.
Changing back to using pip for hivemind installation
We recently found that our current installation methodology for hivemind could lead to unexpected package versioning issues, so we’re switching back to using pip (the Python package installer) and pinning the versions of the packages that hivemind uses.
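As a generic illustration of the pattern (the package names and versions below are placeholders, not hivemind's actual dependency list), pinning exact versions in a setup.py looks like this:

# Placeholder example of exact version pinning with pip/setuptools; these are
# NOT hivemind's real dependencies, just the general pattern.
from setuptools import setup, find_packages

setup(
    name="example_app",
    version="1.0.0",
    packages=find_packages(),
    install_requires=[
        "aiohttp==3.7.4",       # '==' pins pip to exactly this version
        "SQLAlchemy==1.3.24",
    ],
)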
Performance testing and optimization for hivemind
While testing the develop branch of hivemind on our production API node (https://api.hive.blog), we noticed a slowdown in the bridge_get_ranked_post_by_created_for_tag query (its average response time went from 64ms to nearly 2s).
This problem was ultimately traced down to a lack of sufficient statistics being accumulated for the tags_ids column in the hive_posts table. The collected statistics weren’t sufficient to model the probability distribution of the tags used by posts, which resulted in the query planner selecting an under-performing query plan.
What’s interesting here is that this was a latent performance issue that could have occurred on any given API node if it collected an unlucky statistical sample (the problem wasn’t really a master vs develop branch issue). We fixed the issue by increasing the statistics target for this column from the default of 100 to 1000; a sketch of the equivalent manual change follows the link below:
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/503
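For node operators who want to apply the equivalent change by hand, it is the standard PostgreSQL per-column statistics target; a minimal sketch (the connection string is a placeholder) looks like this:

# Sketch: raise the per-column statistics target and re-analyze so the query
# planner sees the richer tags_ids distribution. Connection string is a placeholder.
import psycopg2

conn = psycopg2.connect("dbname=hivemind user=hivemind")
conn.autocommit = True
with conn.cursor() as cur:
    # PostgreSQL's default statistics target is 100; 1000 samples the column
    # much more thoroughly.
    cur.execute("ALTER TABLE hive_posts ALTER COLUMN tags_ids SET STATISTICS 1000;")
    cur.execute("ANALYZE hive_posts;")
conn.close()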
Hivemind memory consumption
We’re still researching potential ways to decrease the amount of memory consumed by the hivemind sync process over time. We’ve already reduced memory consumption somewhat, and further reductions look possible.
Postgres 13 vs Postgres 10 for hivemind
During our search for a possible solution to the above problem (before we realized that increasing the statistics target was the best solution), we also tried upgrading our SQL database from postgres 10 to postgres 13, to see if it would select a better query plan.
The database upgrade had no impact on the above problem, but we found another slowdown tied to postgres 13 during hive sync (the indexer that adds data from the blockchain to the database). This problem occurs because the postgres 13 planner incorrectly estimates the cost of updating rshare totals during ‘live sync’ and decides to do just-in-time (JIT) compilation, which adds 100ms to the query time (update_posts_rshares normally averages around 3ms).
We confirmed this was the issue by increasing the threshold cost required before the planner is allowed to employ JIT compilation (effectively disabling JIT for the query). In this scenario, performance was slightly better for postgres 13 than for 10. Once we move to 13, we’ll need to select a long-term solution for this issue (either improve the cost estimation or just disable JIT for this query), but that’s an issue for a later day.
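To illustrate the confirmation step (these are standard PostgreSQL settings; the threshold value and connection string below are placeholders), raising the JIT cost threshold for a session looks roughly like this:

# Sketch: raise jit_above_cost so the planner's (mis)estimated cost never
# crosses the JIT threshold; "SET jit = off" would disable JIT outright.
# Connection string and threshold value are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=hivemind user=hivemind")
with conn, conn.cursor() as cur:
    # Default jit_above_cost is 100000; a huge value effectively disables JIT
    # for this session.
    cur.execute("SET jit_above_cost = 1000000000;")
    cur.execute("SHOW jit_above_cost;")
    print("jit_above_cost is now", cur.fetchone()[0])
conn.close()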
Functional testing and fixes for hivemind
While working on fixes to community-related API calls, we also improved our mock testing capabilities to verify the changes (mock testing lets us inject “fake” data into an existing hivemind data set for testing purposes; a rough sketch of the idea follows the links below).
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/496
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/499
https://gitlab.syncad.com/hive/hivemind/-/merge_requests/501
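To give a rough idea of what gets injected, a mocked operation might look like the example below. The wrapping mock-file format is an assumption rather than hivemind's actual schema; only the custom_json payload follows the usual Hive community-operation shape:

# Illustration only: a hand-built operation of the kind mock testing can splice
# into hivemind's input stream. The wrapping format is an assumption; the
# custom_json payload follows the usual community "subscribe" shape.
mock_operation = [
    "custom_json",
    {
        "required_auths": [],
        "required_posting_auths": ["test-account"],
        "id": "community",
        "json": '["subscribe", {"community": "hive-123456"}]',
    },
]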
Modular hivemind (Application framework for Hive apps)
We’re currently building a sample application with the prototype for our modular hivemind framework that will support the account history API. Hopefully we’ll be able to perform a full test of this sample application by sometime next week.
Condenser wallet
We’ve been doing some condenser wallet testing and bug fixing. We fixed a bug in the new feature by @quochuy that generates a CSV file with a user’s transaction history. The fix has been deployed to https://wallet.hive.blog.
https://gitlab.syncad.com/hive/wallet/-/merge_requests/106
Testnet
We’ve had a few brave souls do some testing with the testnet, but I’d like to see a lot more, especially from users supporting Hive API libraries and Hive-based apps.
But everyone is welcome to play around on the testnet and try to break things. As a regular Hive user, you can log in with your normal credentials via:
https://testblog.openhive.network (hive.blog-like testing site)
or
https://testnet.peakd.com/ (peakd-like testing site)
You can also browse the testnet with this block explorer: https://test.ausbit.dev/
Going forward, the testnet should be the preferred vehicle for initial testing of Hive apps. Testing new features now, before the hardfork, helps us identify areas where we may want to change API responses, etc., before there’s an “official” API response that must then be changed later.
Planned date for hardfork 25
I’m still projecting the hardfork will be in the last week of June.
It is good that the testnet is open for regular bods. I tried creating a post and it failed with “Cannot parse asset symbol”, which is an error I have never seen before.
Is there a place to report bugs etc?
That looks like a condenser bug, so you can create an issue here: https://gitlab.syncad.com/hive/condenser/-/issues
You may need to register with gitlab first, not sure.
Thanks for the update.
I look forward to the first week of August! 😜
Unfortunately, if we don't manage the end of June, it likely will be August, as several key Hive programmers will be going on vacation in July.
Vacations are important. So is timing. Everyone scattering shortly after a big change that could potentially introduce unforeseen circumstances, all while knowing in advance I won't know which button to push in the event of a button needing pushing, well; are you sure about this?
Yes, it's not everyone that will be gone, just some. But I want to have several days after HF where everyone is here, just as a precaution.
Had a feeling that part wouldn't be overlooked. I made that mistake once in my life. Never again. Once was enough. And the summer from hell will never be forgotten.
If very little testing gets done by users and apps, then you may be correct, but it doesn't sound like the blockchain developers would be the reason for that.
It does seem like there are some issues with getting Hive Keychain properly set up for the testnet. I'm waiting until that gets sorted out and a nice video is published walking through how to do it.
AFAIK the issue with Keychain is that the modified version has to be accepted by Google (it can be installed from GitHub now, but most people don't know how to do that), and the timing of that is out of the dev's control. At least we shouldn't have to worry about this issue on the next HF.
Awesome article!! I'm new to HIVE but let me just say I am loving it!! love the community and everything this blockchain has to offer so far!!
Thanks and welcome to Hive!
That's great. Eagerly awaiting further updates. Being on Hive has been a really great experience for me.
Thanks for the update, and including where we go to try and break things.
The testnet is eagerly awaiting your next attempt at mayhem.
Mayhem created: the peakd test site posted my test-site blog to my regular blog, and all them pesky one-minute trail votes voted on it already. I let the PeakD folks know in Discord what happened. At least it was a semi-real type of post, just a low-effort repeat-info post though.
Good job. Would be awesome :D
I just participated in the test, and this is how my wallet is displayed in https://testnet.peakd.com/.
I can't view my Blog or my posts.
I am posting from https://peakd.com/
I hope it helps! Keep up the good work!! @blocktrades and Team
Hi! Blogs and posts from the mainnet weren't imported into the testnet. The testnet is separate from the mainnet, so any posts you make there won't show up on the mainnet, and vice-versa.
It is very important to get this information. Thank you very much for your answer. What catches my attention is that the amount of HP I see on the testnet is higher than what I see on peakd. I would like to know what is going on.
The testnet is just a testing site. It's not real Hive. Everyone can be "rich" on the testnet, but the testnet coins have no value, because all those balances will get reset every 6 months or so and no one will accept them as payment.
Oh, it would be great if we were all "rich"! Thanks for clearing up my doubt! Now I feel calmer, back to Reality!