26th update of 2021 on BlockTrades work on Hive software

in HiveDevs · 2 years ago

Below is a list of some of the Hive-related programming issues worked on by BlockTrades team during the past two weeks:

Hived work (blockchain node software)

Hive nodes operating stably throughout past weeks

I recently heard reports that instability in the Hive network had led to some failed hive-engine trading transactions, but we haven’t observed any problems in the operation of hived nodes. This is easy to see from the fact that normal Hive transactions haven’t had any similar problems.

Based on my previous review of hive-engine transactions, I believe the problem is that some of these transactions are simply too big to fit into partially filled Hive blocks because they contain large numbers of proposed trades.

If I’m correct, there are only two ways I can see to address this problem: increase the size of Hive blocks or reduce the size of these transactions. In the near term, I believe the optimal solution would be for hive-engine servers to use encoding techniques when creating these transactions to reduce their size.
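To give a concrete (if simplified) sense of what I mean by encoding techniques, here is a rough sketch; the payload fields below are hypothetical and not the actual hive-engine schema. Even just emitting JSON without whitespace, or batching fewer proposed trades per custom_json operation, noticeably shrinks a large trading transaction:

```python
import json

# Hypothetical hive-engine-style payload containing many proposed trades;
# the real hive-engine schema may differ from this sketch.
trades = [{"symbol": "SWAP.BTC", "quantity": "0.00012345", "price": "0.00167"}
          for _ in range(200)]
payload = {"contractName": "market", "contractAction": "buy",
           "contractPayload": {"trades": trades}}

# json.dumps inserts a space after every ':' and ',' by default;
# compact separators alone shave over a kilobyte from a payload this size.
verbose = json.dumps(payload)
compact = json.dumps(payload, separators=(",", ":"))

print(len(verbose.encode()), "bytes with default separators")
print(len(compact.encode()), "bytes with compact separators")
```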

Code to set new HBD haircut ratio and soft limit completed and tested

We completed the code and tests associated with changing the hard and soft limits for HBD supply, so I’ll start collecting final feedback soon on where we should set the new limits for HF26.

We’ve tentatively set the new hard limit at the point where HBD supply reaches 30% of the virtual supply of Hive (in the current hardfork it is set at 10%).

And we’ve set both the start and end of the soft limit range (the points where post rewards begin to change from HBD to liquid Hive) at 20%. In other words, post rewards would immediately switch from paying HBD to paying only liquid Hive at 20%, instead of gradually shifting between the two currencies as the debt ratio increases.
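As a rough sketch of how the proposed rule would behave (illustrative constants and a deliberately simplified switch; this is my reading of the proposal, not actual hived code):

```python
# Simplified model of the proposed HF26 limits described above (my reading of
# the proposal; the real hived code uses its own internal types and rounding).
HARD_LIMIT = 0.30   # haircut: HBD supply capped at 30% of virtual supply
SOFT_LIMIT = 0.20   # post rewards stop paying HBD once the debt ratio hits 20%

def debt_ratio(hbd_supply: float, virtual_supply: float) -> float:
    return hbd_supply / virtual_supply

def reward_currency(hbd_supply: float, virtual_supply: float) -> str:
    # Proposed behavior: switch immediately to liquid HIVE at the soft limit
    # rather than gradually blending HBD and HIVE as the ratio climbs.
    if debt_ratio(hbd_supply, virtual_supply) >= SOFT_LIMIT:
        return "liquid HIVE"
    return "HBD"

def haircut_active(hbd_supply: float, virtual_supply: float) -> bool:
    # Above the hard limit, the chain applies the "haircut" to the HBD
    # valuation used for conversions (simplified here to a boolean flag).
    return debt_ratio(hbd_supply, virtual_supply) >= HARD_LIMIT

print(reward_currency(15_000_000, 100_000_000))  # HBD (15% debt ratio)
print(reward_currency(21_000_000, 100_000_000))  # liquid HIVE (21% debt ratio)
```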

Added C++ linter to hived build-and-test system (CI)

We modified the docker builder for hived to include clang-tidy linting and fixed a resulting lint warning about a non-optimal copy of a shared pointer: https://gitlab.syncad.com/hive/hive/-/merge_requests/291
We also modified CMakeLists to enforce the requirement that clang lint tools be installed:
https://gitlab.syncad.com/hive/hive/-/merge_requests/286

sql_serializer plugin (writes blockchain data to a HAF database)

Our primary hived work during this period was focused on testing, benchmarking, and making improvements to the sql_serializer plugin and the HAF-based account history app:

sql_serializer performance testing

We re-tested the sql_serializer’s sync-to-head-block performance after all the bug fixes were verified, and there were no performance regressions.

We’ve set up a number of new fast servers in our in-house datacenter to speed up our verification and benchmarking tests. We’ve also just started experimenting with how fast we can make the IO systems for these servers on a reasonable budget, using software RAID on mid- and high-end 2TB NVMe drives (Force MP600, MP600 Core, MP600 Pro, Samsung 980, and possibly the rather expensive MP600 Pro XT), with varying numbers of drives in the RAID array and with various distributions of the HAF database’s tablespaces across the drives.

On our fastest system currently (an AMD 5950X with 128GB RAM and a 4x Samsung 980 RAID0 array), it took 25594s to reach 58M blocks and restore database indexes. On a similar system with slower IO (an AMD 5950X with 128GB RAM and a 3x Force MP600 RAID0 array), it took 27351s.

On both these systems, performance seems to be limited by a mix of CPU speed and IO speed, but on systems with more typical drives, disk IO speed will likely be the dominant factor, since a full sync to 58M blocks creates a 1.6TB database.
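For a rough sense of scale, here is my back-of-envelope arithmetic on the figures above (derived from the quoted timings, not separately measured):

```python
# Back-of-envelope throughput from the benchmark figures quoted above
# (sync time includes restoring database indexes, so per-block rates during
# the sync phase itself would be somewhat higher).
blocks = 58_000_000

for label, seconds in [("4x Samsung 980 RAID0", 25_594),
                       ("3x Force MP600 RAID0", 27_351)]:
    print(f"{label}: {blocks / seconds:,.0f} blocks/s, {seconds / 3600:.1f} hours")

# A ~1.6TB database for 58M blocks is roughly 27KiB of HAF data per block.
print(f"{1.6e12 / blocks / 1024:.0f} KiB per block on average")
```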

Eventually we’ll also test on some slower systems, but for now we’re doing full syncs only on our fastest systems (we sync just to 5M blocks on our slower ones), since our primary goal right now is to find bugs in the code and the tests are time-consuming even on our fastest machines.

Moved image server to much bigger (and much more expensive) server

We were almost out of disk space on our existing image server (it only has 36TB of storage with its RAID setup), so we’ve been migrating the images to a new server with a 168TB RAID drive. We completed the handoff to the new server this weekend and worked through some minor issues that resulted (tuning the caches appropriately, fixing a rate-limiting issue between the new server and api.hive.blog, etc). If you noticed any issues rendering or uploading an image this weekend, you were likely observing us at work.

During this process we noticed that the cache-busting code added to condenser was negatively impacting Cloudflare’s CDN-based caching (this became more obvious during our performance testing with the new image server, because it is located further from our US office and cache misses were more painful due to network latency), so we asked @quochuy to revert that change (which he has already done; the revert will be deployed to production tomorrow). Once that change is deployed, I expect avatars on hive.blog to render about 2x faster on average.

We also noticed in this testing that we could potentially reduce the delays incurred by cache misses in the future by creating a simple HAF-based app to locally maintain the link between Hive accounts and their avatars, avoiding the call to database.get_accounts that the image server currently makes to a remote hived node. In this scenario, a remote HAF server would keep a “hive account” to “hive avatar” mapping table and push the occasional table updates to a local replica of the mapping table on the image server. I think this would make a nice and simple “starter” task for someone looking to create their first HAF app.
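For anyone tempted to pick that up, here is a very rough sketch of the core logic. This is not the hive_fork_manager API (a real HAF app would implement this as SQL driven by the fork manager), and the field names below are assumptions based on how condenser-style frontends store profile images in account metadata:

```python
import json

# Hypothetical in-memory stand-in for the "hive account" -> "hive avatar"
# mapping table described above.
avatar_map: dict = {}

def process_account_update(op: dict) -> None:
    """Update the avatar mapping from an account_update-style operation.

    Assumes the avatar URL lives at profile.profile_image inside the account's
    (posting_)json_metadata, as condenser-style frontends typically store it.
    """
    account = op.get("account")
    raw_meta = op.get("posting_json_metadata") or op.get("json_metadata") or ""
    try:
        profile = json.loads(raw_meta).get("profile", {})
        image = profile.get("profile_image")
    except (ValueError, AttributeError):
        return  # malformed or non-dict metadata; leave the mapping unchanged
    if account and image:
        avatar_map[account] = image

# Fabricated operation payload, for illustration only.
process_account_update({
    "account": "alice",
    "posting_json_metadata": json.dumps(
        {"profile": {"profile_image": "https://example.com/alice.png"}}),
})
print(avatar_map)  # {'alice': 'https://example.com/alice.png'}
```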

Hive Application Framework: framework for building robust and scalable Hive apps

Fixing/Optimizing HAF-based account history app (Hafah)

We found and fixed several bugs in the HAF software ecosystem (sql_serializer, hive_fork_manager, account history app) this week. We completed a full sync to head block using 7 sending threads and 7 receiving threads on both the C++-based account history app (took 19730s) and the newer, python-based account history app (took 22021s).

So the C++ version is currently a little over 11% faster at syncing than the python version. Ideally we’ll be able to tune the python version to achieve the same speed as the C++ version, in which case we’ll be able to drop the C++ version and just maintain the python version in the future. And it’s likely that whatever knowledge we gain during that analysis will be useful for future python-based HAF apps as well.
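For reference, spelling out that comparison from the quoted timings:

```python
# Relative sync speed implied by the timings above (C++ vs. python app).
cpp_seconds, python_seconds = 19_730, 22_021
print(f"C++ app is {python_seconds / cpp_seconds - 1:.1%} faster in blocks/s")
```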

Upcoming work

  • Release a final official version of hivemind with postgres 10 support, then update hivemind CI to start testing using postgres 12 instead of 10. We finished a full sync to headblock of the new version, and next @gandalf will deploy it for production testing tomorrow. I don’t expect any problems, so we’ll probably officially recommend that API node operators upgrade to the new version this week.
  • Run new tests to compare results between account history plugin and HAF-based account history apps.
  • Simplify build and installation of HAF-based apps and create a repo with HAF components as submodules to track version requirements between HAF components.
  • Finish setup of continuous integration testing for HAF account history app.
  • Test and benchmark multi-threaded jsonrpc server for HAF apps.
  • Finish conversion of hivemind to HAF-based app (didn’t get back to this task last week). Once we’re further along with HAF-based hivemind, we’ll test it using the fork-inducing tool.
  • Continue work on speedup of TestTools-based tests.

Schedule predictions (always a bit dangerous)

At this point I’m fairly confident we’ll be able to release HAF for production use by the end of this month. Since HAF doesn’t impact hived consensus, it can be released whenever it is ready, without requiring a hardfork.

As for hardfork 26 itself, it is still scheduled for the December/January time frame (we’ll set an official date early next month). We’ve got two HF-related tasks we still haven’t started on, but I don’t think they will be too difficult: 1) make some simple “low-hanging fruit” improvements to RC calculations (for example, we’ve seen that some operations of varying size don’t get charged based on the byte size of the operation) and 2) allow asset-related transactions to use numeric asset identifiers (aka NAIs) instead of strings, as part of the process of deprecating string-based asset identifiers. I’m confident we can complete the first task in time for the hardfork, and I’m reasonably confident we can complete the second task as well.
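As a small illustration of the second item, here are the two asset encodings involved; the NAI values below are the ones used on mainnet today, but treat the exact payload shape as an approximation rather than a spec:

```python
# Legacy string-encoded assets vs. NAI-encoded assets (approximate shapes).
legacy_hive = "1.000 HIVE"
legacy_hbd = "0.500 HBD"

nai_hive = {"amount": "1000", "precision": 3, "nai": "@@000000021"}  # 1.000 HIVE
nai_hbd = {"amount": "500", "precision": 3, "nai": "@@000000013"}    # 0.500 HBD
```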


And we’ve set both the start and end of the soft limit range (the points where post rewards begin to change from HBD to liquid Hive) at 20%. In other words, post rewards would immediately switch from paying HBD to paying only liquid Hive at 20%, instead of gradually shifting between the two currencies as the debt ratio increases.

Not really a fan of this. It's never been shown this actually has any benefit since it dumps more HIVE on the market and may make matters worse. (I mean the 20% vs 30% vs not at all, the latter being what DHF does. The part about switching immediately vs the gradual mix is fine.)

It does directly create more Hive, but on the other hand it is a fixed amount of Hive, unlike HBD which could result in even more Hive being created in a market downturn via conversions. I see the intent of the soft limit as a means of reducing debt production for the chain in a falling market.

It seems like a very small factor given the amount of HBD that would already exist. 20% would exist in HBD at that point. New HBD from rewards is 50% (HBD/HP split) of 50% (author/curator split) of 75% (reward pool) of 8% (current inflation; will be less) per year. So that's something like 1.5% per year added from printing, or 0.125% per month. Doesn't seem useful/significant to me, even in a downturn.
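Spelling that estimate out with the percentages above (illustrative only, since the actual chain parameters and inflation rate change over time):

```python
# The commenter's estimate of yearly HBD printed via post rewards.
hbd_hp_split = 0.50   # author rewards split 50/50 between HBD and HP
author_share = 0.50   # author vs. curator split
reward_pool = 0.75    # share of inflation going to the reward pool
inflation = 0.08      # approximate current yearly inflation

annual = hbd_hp_split * author_share * reward_pool * inflation
print(f"{annual:.2%} of virtual supply per year")  # 1.50%
print(f"{annual / 12:.3%} per month")              # 0.125%
```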

At the same time, we've seen it happen the other way where there is high demand for HBD but printing gets cut off and HIVE given out instead, which could dampen the market and reduce a rise in the price (which would resolve the situation in a better way).

The way I see it, in terms of current supply and demand from rewards, giving out HBD is always going to be better. If people don't want the HBD right now, it'll get sold and/or converted quickly, which ends up being essentially the same as giving out HIVE instead. If people do want the HBD, it constrains the supply of new HIVE and helps the price, which in turn helps the ratio.

It is true, however, that we still have HBD potentially available from DHF (and indeed, this is what happened before when printing was cut off with high HBD demand), which can be used to satisfy HBD demand and absorb HIVE from the market, as well as HIVE->HBD conversions (w/fee), so I guess it doesn't matter that much.

Thank you for your work!

What about the idea proposed by @theycallmedan for an instant powerdown, with a percentage going to users who keep their HIVE staked?

Will it be in the HF26?

It's a good question, but is there still time to discuss exactly how that would work?

I think there is time for that, the main discussion was here:

https://peakd.com/powerdown/@theycallmedan/power-down-time

But I didn't hear about any plans for it to be included in the HF at all.

That's why I'm asking.

Thank you for the link, I'll read up on it.

I've been so busy with HAF-related work that this kind of slipped my mind. I'll give it some thought.

@blocktrades If this falls outside the budgeted dev time before HF26, I'm down to push my communities work to later and work on this in parallel with RC delegations. I don't imagine it would be terribly difficult; it's mostly the community aspect of defining the various parameters that will be hard to figure out.

I don't know much about software, but I think these regular updates for the community are really great :) Keep up the good work

That's good work, well designed and structured.

great job as always!

HIVE needs to merge with Telegram

We did some work in the past to add encrypted instant messaging and email to Hive, but it was several years ago, and we would need to clean up that code a bit before it could be used now.

I think this would be a really good idea, hope you have some time to look into it

Great work, and we are proud of you guys.

This is so beautiful, thanks for this as always.

I guess the powerdown period won't be tackled in the next HF, kind of sad.

We work to keep our letters in the blockchain. Without you, there are no authors here. Good luck guys!

Hello @blocktrades… I have chosen your post about “26th update of 2021 on BlockTrades work on Hive software” for my daily initiative to re-blog, vote, and comment…
Let's keep working and supporting each other to grow at Hive!...

Moved image server to much bigger (and much more expensive) server

Wow, what about the long term, can images become a problem?

make some simple “low-hanging fruit” improvements to RC calculations

Nice very cool!

About the HBD and the increased limit, I really don't like it. If we play this out on a larger scale, it could do enormous damage to the Hive ecosystem.

Instead of increasing it, I would like more on-chain use cases for HBD that burn it or lock it up: wallet creation, community creation, some special benefits via HBD, and so on.

I would really love to have a good working stablecoin on-chain, especially as an alternative to Tether and co.

But debt can fuck up a lot, and starting to fix the problems with higher numbers ends up in MASSIVE FUD. That's a bit uncool IMO.

Wow, what about the long term, can images become a problem?

I would think so. Who is paying to store them once the content referencing them is long forgotten, and the users posting them may even be dead? I think all sites (including centralized) are going to eventually face this, once growth slows and they start to look more closely at cost centers.

Short term, the cost of boosting a server is pretty low. Long term, I think it's a problem.

I think so too, social media will in the long term turn into a graveyard. Idk how many people on Facebook are dead, but I would expect the number is increasing day by day. They sell data, so maybe they can refinance it (also with data from dead people, like visitors to those pages, no idea).

But in a decentralized world, someone needs to pay the bill. Best case, the user or the dapp. If the user pays, they own the data. If the dapp pays, it's the Web2 situation again.

I could imagine the community paying for it by creating community image hosting. I think in combination with a token, that could be really good. Sure, hosting would be different from the on-chain things, but it could be in one transaction (not a blockchain transaction).

The biggest problem I see is that it is pretty centralized. And Hive without pictures would work out very badly (blogging in general).

Hi dear @blocktrades!
A friend of mine asked me about these transactions - and I was unsure what to answer them!

(screenshot of the transactions in question)

link and source here

Thank you in advance for your time :)

They probably purchased a Hive Power delegation from us in the past, and it is an automated message telling them the delegation has expired.

I very much appreciate your response, thank you!

We are always here on Hive. We need people like you to keep us updated. I think it was a perfect blog and I really want to know what's new all the time, so I have to follow you.
Keep posting blogs like this, it's useful for everyone. Good luck to you, my friend.
count me in @thegolden

Congratulations @blocktrades! You have completed the following achievement on the Hive blockchain and have been rewarded with new badge(s) :

You received more than 1375000 HP as payout for your posts and comments.
Your next payout target is 1380000 HP.
The unit is Hive Power equivalent because your rewards can be split into HP and HBD

You can view your badges on your board and compare yourself to others in the Ranking
If you no longer want to receive notifications, reply to this comment with the word STOP

To support your work, I also upvoted your post!

Congratulations @blocktrades! Your post has been a top performer on the Hive blockchain and you have been rewarded with the following badge:

Post with the highest payout of the day.

You can view your badges on your board and compare yourself to others in the Ranking
If you no longer want to receive notifications, reply to this comment with the word STOP

Is not operative!

Awesome update here, thanks! Looks like my ultimate goal of doing some Python work for Hive might not be too far-fetched! lol. I've been studying and slowly getting better at Python in order to do some of my own development work on here if I can swing it. Still months off from that, I think, but certainly a goal of mine.

Otherwise good stuff. The Samsung drive is actually the exact one I bought and I love it! Works very well so far so let’s hope that keeps up.

Looking at smooth's response about HBD, it's good to consider both sides, but I think the one you propose might be a safe route. If it sucks, we can include a change in the next fork after that to revert it.

I'll just ride on your update sir...