11th update of 2022 on BlockTrades work on Hive software

in HiveDevs • 2 years ago

blocktrades update.png

Below are a few highlights of the Hive-related programming issues worked on by the BlockTrades team since my last post.

Hived (blockchain node software) work

Refactoring of transaction and block handling code

The primary change associated with this code refactoring is the creation of new full_transaction and full_block objects to avoid unnecessarily repeating previous computations. These objects are wrappers around the old transaction and block data and they also contain metadata about the original data (for example, a full_block can contain: the compressed binary block data, an uncompressed binary version of the block, the unpacked block decoded into a block header and associated transactions, and various metadata such as the block_id).
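As a rough sketch of the idea (the names and members below are simplified stand-ins, not the actual hived classes), the wrapper lazily computes and caches each derived form of the block the first time it's requested, so later consumers reuse the work instead of repeating it:

```cpp
// Minimal sketch of the lazy-caching idea behind a full_block-style wrapper.
// Names and types are simplified stand-ins, not the actual hived classes.
#include <optional>
#include <string>
#include <vector>

using block_id_type = std::string;  // stand-in for a 20-byte block id hash

// Placeholder helpers: the real code would do zstd decompression and compute
// the block id hash over the block header.
static std::vector<char> decompress(const std::vector<char>& in) { return in; }
static block_id_type compute_block_id(const std::vector<char>& raw)
{ return "id-of-" + std::to_string(raw.size()); }

class full_block
{
public:
  explicit full_block(std::vector<char> compressed)
    : compressed_data(std::move(compressed)) {}

  // Each accessor does its expensive work at most once and caches the result,
  // so p2p, validation, and API code can all reuse it.
  // (The real implementation would also need to be thread-safe.)
  const std::vector<char>& get_uncompressed()
  {
    if (!uncompressed_data)
      uncompressed_data = decompress(compressed_data);
    return *uncompressed_data;
  }

  const block_id_type& get_block_id()
  {
    if (!block_id)
      block_id = compute_block_id(get_uncompressed());
    return *block_id;
  }

private:
  std::vector<char> compressed_data;                   // as received/stored
  std::optional<std::vector<char>> uncompressed_data;  // filled lazily
  std::optional<block_id_type> block_id;               // filled lazily
};
```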

This week, we did a “sync hived from scratch” benchmark with just the above change (none of the changes discussed below yet included) and sync time was down to 23.5hrs (previously took around 48hrs after our p2p optimizations from a while back, and took about a week before that). So, all told, over a 7x speedup so far versus the production version of hived.

We completed the basic refactoring above as of my last report, so this week we focused on further optimizations now available to us after these changes, as discussed below:

Blockchain thread pool added to speed up block/transaction processing

Probably the most significant such optimization, both now and for the long term, is that we’ve added a thread pool that can be used to handle any work that doesn’t need access to chainbase (blockchain state data). This thread pool allows us to speed up a variety of computations that previously loaded down the write_queue thread (block compression/decompression, crypto calculations, pre-checks on operations, etc).
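Here’s a hedged sketch of the pattern (illustrative only; the actual pool and task breakdown in hived may differ), using boost::asio’s thread_pool to stand in for the new blockchain thread pool: chainbase-free work is posted to worker threads, while the write-queue thread only waits for the results it needs before touching state:

```cpp
// Sketch of offloading non-chainbase work to a worker pool; illustrative
// only -- hived's actual pool and task breakdown may differ.
#include <boost/asio/post.hpp>
#include <boost/asio/thread_pool.hpp>
#include <future>
#include <vector>

struct full_block_stub { std::vector<char> compressed; };

// Stand-ins for the expensive, state-free work mentioned above.
bool verify_signatures(const full_block_stub&) { return true; }
std::vector<char> decompress_block(const full_block_stub& b) { return b.compressed; }

int main()
{
  // Default of 8 workers, matching the post; overridable in the real node
  // via --blockchain-thread-pool-size.
  boost::asio::thread_pool pool(8);

  full_block_stub block{{'x', 'y', 'z'}};

  // CPU-heavy, chainbase-free work runs on pool threads...
  std::promise<bool> sig_ok;
  boost::asio::post(pool, [&] { sig_ok.set_value(verify_signatures(block)); });

  std::promise<std::vector<char>> raw;
  boost::asio::post(pool, [&] { raw.set_value(decompress_block(block)); });

  // ...while the write-queue thread only waits for the results it needs
  // before applying the block to state.
  bool ok = sig_ok.get_future().get();
  std::vector<char> data = raw.get_future().get();
  (void)ok; (void)data; // apply_block(...) would go here

  pool.join();
}
```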

These changes will result in very significant performance improvements for replay, sync, and live sync modes. We’re still running or setting up benchmarks to measure sync and live sync speedups, but even in basic replay tests where we expect to see the least improvement, we’ve seen a decent speedup: replay time on an AMD 5950X with a 4x NVMe RAID is down to 5hrs 54mins (with chainbase stored on the NVMe RAID, not in memory!).

By default, the thread pool is allotted 8 threads, but the thread pool size can be overridden at hived startup using a new command-line/config file option: --blockchain-thread-pool-size.
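For example (the option name comes straight from the above; the value of 16 and the exact syntax are just assumptions based on hived’s usual command-line/config conventions), a node operator on a many-core machine might raise the pool size like this:

```
# Illustration only: raise the pool size on a many-core machine.
# On the command line:
hived --blockchain-thread-pool-size=16

# Or in the node's config.ini:
blockchain-thread-pool-size = 16
```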

New json parser for validating custom_json operations (layer 2 operations)

Separately, we also replaced the json parser used to validate custom_json transactions with one that is more than 60x faster for json validation. Given the large number of custom_json operations stored in the blockchain, this will reduce CPU load substantially. Further improvements can still be made later: we’re not yet using this parser for the API web server, and it would provide significant benefits there as well.
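As a purely illustrative sketch (RapidJSON is used here just for the example; it is not necessarily the parser we adopted), this is the kind of validation pass involved in checking that a custom_json payload is well-formed:

```cpp
// Illustrative only: validates that a custom_json payload is well-formed
// JSON using RapidJSON. The parser hived actually adopted may differ.
#include <rapidjson/document.h>
#include <rapidjson/error/en.h>
#include <cstdio>
#include <string>

bool is_valid_custom_json(const std::string& payload)
{
  rapidjson::Document doc;
  doc.Parse(payload.c_str());
  if (doc.HasParseError())
  {
    std::printf("invalid json at offset %zu: %s\n",
                doc.GetErrorOffset(),
                rapidjson::GetParseError_En(doc.GetParseError()));
    return false;
  }
  return true;
}

int main()
{
  // A typical layer-2 payload, e.g. a game move carried in a custom_json op.
  std::string payload = R"({"app":"example","action":"move","data":[1,2,3]})";
  std::printf("valid: %s\n", is_valid_custom_json(payload) ? "yes" : "no");
}
```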

Adding block ids to block_log.index

Currently we don’t directly store block ids in the block_log or block_log.index files, so they have to be dynamically re-computed at run-time, which is computationally expensive (the block log file must be read, the block decompressed, and the hash of the block_header computed). Retaining these ids in the block_log index will speed up several places where such data is required. The “first round” (and maybe only round, depending on performance results) of this task is expected to be completed tomorrow, then we’ll run benchmarks over the weekend.
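To make the trade-off concrete, here is a hedged sketch (the exact on-disk layout we end up with may differ in field sizes and packing) of an index entry that stores the block id next to the block’s offset in the block_log, so lookups no longer need the read/decompress/hash sequence:

```cpp
// Illustrative sketch only: the real block_log.index layout may differ
// (field sizes, packing, endianness, etc.).
#include <array>
#include <cstdint>
#include <cstdio>

// Old-style entry: just where the block starts in block_log. Getting the
// block id meant reading that range, decompressing it, and hashing the header.
struct old_index_entry
{
  uint64_t offset_in_block_log;
};

// New-style entry: the 20-byte block id is stored next to the offset, so it
// can be returned straight from the index file.
struct new_index_entry
{
  uint64_t offset_in_block_log;
  std::array<uint8_t, 20> block_id;  // in Hive, the leading bytes of the id also encode the block number
};

int main()
{
  // Index files are just arrays of fixed-size entries, so a block's record
  // is found by direct offset arithmetic -- which is why entry size matters.
  std::printf("old entry: %zu bytes, new entry: %zu bytes\n",
              sizeof(old_index_entry), sizeof(new_index_entry));
}
```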

In the next phase of this task, we’ll begin benchmarking this new version of the code and experiment with further optimizations.

Further optimization of OBI (one-block irreversibility) protocol

The optimizations to the OBI protocol are mostly done, but the dev for this work is currently tied up with the refactoring of the transaction and block handling code (the task discussed above), so it still needs to be fully completed and tested. I don’t expect this task to take long once it resumes: based on the rapid progress above in blockchain processing optimizations, I believe we should be able to resume and finish the optimizations to OBI by sometime next week.

Hived tests

We continued creating tests for the latest changes, and identified another easily fixed bug in the new code for transaction serialization (the problematic code generated two copies of the fee asset for the witness update operation).

Hive Application Framework (HAF)

We’ve begun re-examining the scripts for backing up a HAF database and HAF apps, in light of the changes made to HAF since those scripts were created. I expect this task will be completed soon.

HAF-based hivemind (social media middleware server used by web sites)

We found and fixed a few more issues with HAF-based hivemind last week and we’re testing them now with full syncs and live syncs. I think we’re mostly done here, so the remaining big task is to create and test a Docker image for it to ease deployment.

Some upcoming tasks

  • Allow hived nodes to directly exchange compressed blocks over the p2p network.
  • Finish up storage of block ids in the block_log.index file.
  • Test, benchmark, and optimize recent changes to the blockchain and p2p layers of hived.
  • Merge in the new RC cost rationalization code (this is blocked by the hived optimizations task above because those changes will impact the real-world costs of operations).
  • Test enhancements to the one-block irreversibility (OBI) algorithm.
  • Finish testing and dockerizing HAF-based hivemind and use the docker image in CI tests.
  • Test the above on a production API node to confirm real-world performance improvements.
  • Collect benchmarks for a hafah app operating in “irreversible block mode” and compare to a hafah app operating in “normal” mode (low priority).
  • Continue testing hived using the updated blockchain converter for mirrornet.

When hardfork 26?

Based on progress this week, we’re still on track for a hardfork at the end of July. We’ve updated the HF date in the develop branch to reflect this plan, and if no surprise issues pop up before Monday, it will be time to notify exchanges of the planned hardfork date.


During the 4 years when Steemit Inc. was in charge of code management, we never witnessed improvements of this magnitude, quite the opposite.
I'm really impressed with the optimization work that has been done lately. Kudos!
I'm really impressed with the optimization work that has been done lately. Kudos!

Hardforks are less frequent and sometimes delayed, but I feel much more confident about their stability and the benefits they will provide for the future of Hive.

they weren't called Stinc for nothing :P

A question was asked in Discord today regarding the RC cost rationalization changes. Is there a rough idea of what impact this will have on the costs of various operations? i.e. will custom_json ops increase in cost more quickly during network congestion?

I don't have a firm answer yet, because RC costs depend on real costs, and the code optimizations are changing the real costs. A good example is how the new json parser lowered the CPU cost of custom_json ops. But probably the biggest change will be that we're now accounting for signature costs, which were previously ignored by the RC calculator.

So RC costs will be one of the last things we know, as it will be the last change to hived before the hardfork.

But I do suspect that the costs of custom_json will go up (because of the aforementioned signature time now being included in costs), and the RC code does increase costs during network congestion. Making that work optimally, though, will likely require tuning.

New costs of operations are all over the place. Some became very cheap, some really costly. When it comes to custom_json, based on historical transactions the cost went slightly up; however, once the optimized parser is factored in, it might actually go down.
There is a new mechanism that measures resource popularity within a time window (currently a day), so when you try to do the same thing as everybody else (e.g. play Splinterlands), the cost might still go up; however, if at the same time you try to do something else (e.g. write a comment), the cost might go down, because the state resource that comments require won't be as popular at that moment.
The biggest unknown is the change in user behavior. HF26 introduces an RC delegation mechanism, which might free up a lot of RC that so far was unused or burned on free account tokens. More RC in use means a higher level of resource consumption, which raises the RC cost of everything.
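To illustrate the general shape of the mechanism described above (this is not the actual RC plugin code; the names, decay model, and pricing are assumptions made only for the sketch), a per-resource usage counter decayed over roughly a one-day window could drive a price multiplier like this:

```cpp
// Very rough sketch of a windowed "resource popularity" measure of the kind
// described above. The real RC implementation in hived differs; names,
// decay model, and pricing here are assumptions for illustration only.
#include <array>
#include <cstdio>

enum resource_type { state_bytes, history_bytes, execution_time, RESOURCE_COUNT };

struct resource_popularity
{
  // One decaying usage counter per resource, roughly covering a one-day window.
  std::array<double, RESOURCE_COUNT> recent_usage{};

  // Called once per block: decay old usage, add what this block consumed.
  void update(const std::array<double, RESOURCE_COUNT>& consumed_this_block)
  {
    constexpr double blocks_per_day = 28800.0;      // 3-second blocks
    constexpr double decay = 1.0 - 1.0 / blocks_per_day;
    for (int r = 0; r < RESOURCE_COUNT; ++r)
      recent_usage[r] = recent_usage[r] * decay + consumed_this_block[r];
  }

  // The more popular a resource has been lately, the more RC it costs now.
  double price_multiplier(resource_type r, double baseline) const
  {
    return 1.0 + recent_usage[r] / baseline;
  }
};

int main()
{
  resource_popularity pop;
  // Simulate a day dominated by state-heavy ops (e.g. lots of game custom_json).
  for (int block = 0; block < 28800; ++block)
    pop.update({1000.0, 50.0, 10.0});

  std::printf("state multiplier:     %.2f\n",
              pop.price_multiplier(state_bytes, 1.0e7));
  std::printf("execution multiplier: %.2f\n",
              pop.price_multiplier(execution_time, 1.0e7));
}
```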

This is an exceptional and mind-blowing project, keep up the good work.


This is amazing.

Recently I've been having some challenges with my node on peakd. The favorites list just stopped working. I tried switching to another node that was recommended, but it's still the same. Please, is there anything I'm doing wrong? If yes, how do I solve this problem?

Hey, thanks for reaching out. Can you check how many accounts are in your list? More than 100?

Also if possible can you reach out on PeakD Discord? https://discord.gg/PFb7V4W

I think I will just reach out on discord.

I don't know how favorites is implemented on peakd, so I can't be sure. If they are stored as custom_json data (which would be my best guess), then it could be node-related, but if so, switching to another node should probably have fixed it, unless there is something unusual about your account (e.g. a lot of transactions).

Hmm, I don't really know what json data is, and I haven't made too many transactions either.

How do I know if it's stored as json data?

I'll ping @asgarth as he would be the person most likely to know the answer to this.

Thanks 🙏

Okay, thank you for your help.

task discussed below

Sure, what do you want me to discuss? 😁

The json parser is not something I saw mentioned before but that makes a lot of sense.

In the long run, do you see most of the transactions tied to the chain being Custom JSONs as opposed to direct write functions such as blog posts or comments?

I would think this would make for more efficient operations.

Yes, I only started thinking about the json parser recently after some benchmarking showed that it was becoming the next limit on performance after we eliminated some even larger bottlenecks. My original idea was to remove it entirely (one of the other devs was opposed), but then a third dev stepped up and pointed out that there were much faster parsers we could use, making us both happy.

And, yes, this is very important to performance because we can expect that most operations nowadays already are, and will continue to be, custom_json, because that's the operation which 2nd layer apps will mostly use (well, we'll probably allow for custom_binary eventually as well, but that's a story for later).

even the serious people are adopting lame puns. I feel so validated :P


great job guys!

I think the work that goes into maintaining this Hive ledger is so impressive. Despite the fact that so many people have wanted Hive to fall apart and fail from day one, it just keeps getting better. I appreciate all your work keeping this chain ticking.

Sounds like everything is going to plan at the moment.

That's brilliant work from the blocktrades team and I have seen Hive going from strength to strength over the past couple of years.

I'm especially looking forward to HAF and the possibilities that it opens up for the growth of the eco-system once it is up and running.

Hopefully we can market it properly to new teams and developers to bring them to hive and start building here.

We have the technology and now we just need to let people know about it.

Congrats on getting this far with the upgrades and expecting lots more to come.



A lot of work is going into this new hardfork 👍 Thank you

Thanks guys, this is totally awesome.

How did you learn the intricacies of the internal blockchain mechanics such as peer-to-peer layer, write queue, block handling, irreversibility and so on? I imagine this is extremely rare knowledge. And there are various approaches out there that various blockchains are trying that don't seem to result in much scalability.

I've been programming for about 40 years now, and most of that time was spent developing "high performance" code where speed and memory consumption were very important. Over time, that allowed me to develop a lot of techniques for estimating code performance, and also for analyzing and fixing unexpected performance problems.

This experience often gives me an advantage even against very skilled programmers when it comes to figuring out which algorithm will work best in practice in the real world. Knowing what's likely to work and what's likely to fail, without having to go down the wrong path first, is really beneficial when it comes to coding because it saves a lot of time, and time is always at a premium.

As far as blockchain coding goes, my relative experience is much shorter: I started blockchain coding back in 2013. But the fundamentals of high performance coding (e.g. selection of proper data structures, algorithms for working with parallel threads, efficient memory management) are applicable to most of blockchain-related programming, so the primary new things I had to learn were concepts associated with cryptography. And while the low-level math for this is complex, the higher level conceptual understanding needed to employ cryptographic algorithms as a blockchain programmer isn't that hard to gain.

Long story short: while there are certainly high-profile cases where blockchain projects have failed due to cryptographic mistakes, most of the scaling problems are due to a lack of experience in fundamentals of high performance computing. And while "fundamentals" may imply something that is easy to learn, it's not.

Thanks a lot for taking the time to share. Very interesting. So you have a huge amount of experience with developing high-performance applications which enables you to foresee the implications of design decisions in advance, so you don't get stuck in a local maximum. And you see the scaling problems in the blockchain world as mostly stemming from a lack of experience in high-performance computing?

I am reading the explanations in your blog posts with great interest. From this current one, it sounded like you pinpointed a few areas where computation on the same thing was done more than once, or unnecessarily expensive computation (e.g. accessing chainbase) was done for a whole class of things instead of selectively only when needed, or computation could be avoided if some block metadata (such as block ids) is stored. On a conceptual level, all these changes make very good sense, but I don't know the complexity of diagnosing and making them in practice. Still, taking full sync time from a week to a day, and then further to maybe 5 hours, sounds astonishing and perhaps something that previous blockchain devs could have seen if they had familiarity with the whole codebase. Or maybe you are explaining it in a simple way and you save us a lot of the complexity of what's involved.

Do any of these performance optimizations result in lessening of security?

How long do you think it will take to get "smart contracts" on Hive? I know about hivemind and the next hardfork and so on, but could you give some estimate, like 6 months from the next HF we should be done or in beta, 1 year max for the final product, or so? It is hard to guess without expertise in this field and I've had too high expectations in the past. Thinking about investing some money.

Here's my visit, friend. Thanks for supporting me with your vote, greetings and many blessings.

👍💪🇦🇷

You mention a 7x improvement for full nodes, have you done any timing on strictly consensus or seed node spin up improvement?

The 7x speedup is in full sync of a node from no blocks all the way to head block, and it was measured on a normal consensus/seed node.

Note that "full node" is mostly a dead concept now, because full node normally implied an account history node which will be replaced by HAF account history app.

Now there is a "hived node with sql_serializer plugin" that feeds data to postgres, and I haven't done a benchmark for a hived configured this way with new code yet.

I'm running a few more sync benchmarks of various versions of the code this weekend (to determine which optimizations yielded which gains), and near the end I'll probably have time to try a full sync run with hived+sql_serializer (and probably a replay as well in this configuration).