Are custom_json going to be more expensive now?

in HiveDevs · 2 years ago

The question from the title has shown up many times recently. If you know Betteridge's law of headlines, you'd expect the answer to be "no".

Well, not this time :o)

At least the answer is not that simple and it will greatly depend on what you do and - more importantly - when.

Before we start: if you are interested in the topic of upcoming RC changes, you've probably seen this post by @howo. I find the units used on hiveblocks with M misleading, since they are recalculated from raw values (it would be closer to reality if M was replaced with HP, because that's what it really is). I'm going to use raw values in graphs and examples below (where M/G will mean million/billion in raw units).


It is not 77 of 88 million RC by the way, but 77920 of 88390 HP equivalent in RC - the comma is a thousands separator.

If I had known HF26 would be postponed so much, I'd have planned the RC changes differently and maybe thought about some better way of addressing the same problem RC was designed to solve. As it is now, the changes are rather conservative.


Change in the amount of daily transactions over time. We were at 59M blocks when I started the RC analysis - the temporary increase turned out to be a long-lasting phenomenon.

The only change that could be considered revolutionary is the direct RC delegation mechanism implemented by Howo and described in the article linked above. It might turn out to be revolutionary because it opens a lot more opportunities to utilize now-idle RC. For example, it gives wealthy accounts an alternative to burning their RC for new account tokens, and that is big. New account tokens are exceptionally expensive. Over 98% of all RC used in the past was spent to finance those tokens. A single token is the equivalent of 4600 average comments or 39500 average custom_jsons.

So far the only way for small accounts to get some RC from accounts that have too much of it was to receive vesting delegations. That mechanism can be abused far more easily than the new RC delegations, so it requires greater scrutiny from delegators; it also requires them to give up a portion of their power, which means vesting delegations are not really cheap. But the same people won't care about RC nearly as much, since they can't use it all anyway. We can expect vesting delegations to start being replaced by more and bigger RC delegations. That might turn out to be a bad thing though: RC delegations will be cheaper, which increases profits from operating bots. I'll get to the potential dangers of the mechanism later.


Resources

Quick reminder. Each transaction consumes some resources from up to 5 different pools. These are:

  • history bytes - consumed by every operation and the transaction itself; represents network bandwidth, block log storage, AH storage and now also HAF storage
  • new account tokens - a luxury item among resources, very expensive but used by only one operation
  • market bytes - consumed by market operations; represents the extra cost related to exchanges having to collect those operations on their local AH nodes
  • state bytes - consumed by some operations (and a tiny bit by the transaction itself) that need to store new data
  • execution time - consumed by all operations and the transaction itself

Price of each resource depends on many factors:

  • global regeneration rate
  • fill level of resource pool
  • cost curve parameters (constant)
  • (new) resource popularity share

Resource pools are governed by two sets of parameters. The second one mentioned above defines cost. The first set translates to a sort of "allowance" for each block. The block-budget is an important derived unit that not only allows comparison between the old and new versions of RC params, but also comparison between different pools. Processing of each block is expected to consume a certain amount of resources. If more than that is used, the missing portion is taken from the pools; if not all of it is used, the excess is added to the pools. But to prevent accumulation of resources over time, the pools also have a "half-life": a set percentage of each pool decays with every block. If no resource is used for a long time, the pool fills up until the fixed block-budget and the percentage-based decay become equal, putting the pool in a state of equilibrium - when that happens we have a full pool. Each pace of consumption equalizes at a different level. The lower the level, the more that resource costs - how much more is defined by the cost curve parameters.
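To make the pool dynamics concrete, below is a minimal Python sketch of a single pool. The budget and decay values are made up for illustration - the actual chain constants differ per pool - but the update rule follows the description above.

```python
# A minimal sketch of how a resource pool evolves block by block.
# BLOCK_BUDGET and DECAY_PER_BLOCK are hypothetical, not chain constants.

BLOCK_BUDGET = 1_000          # resources "allowed" per block
DECAY_PER_BLOCK = 0.0001      # fraction of the pool that decays each block

def next_pool_level(pool, consumed):
    """Advance the pool by one block: add the budget, subtract what the
    block actually consumed, then apply the percentage-based decay."""
    pool += BLOCK_BUDGET - consumed
    pool -= pool * DECAY_PER_BLOCK
    return pool

# With zero consumption the pool climbs until budget and decay cancel out,
# i.e. the equilibrium ("full pool") level is about BLOCK_BUDGET / DECAY_PER_BLOCK.
pool = 0.0
for _ in range(200_000):
    pool = next_pool_level(pool, consumed=0)
print(round(pool), BLOCK_BUDGET / DECAY_PER_BLOCK)  # both ~10_000_000
```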


What changed?

It was obvious that after the HF24 optimizations, the values of resources consumed for state bytes and execution time did not reflect reality. On top of that, some operations were not reporting proper consumption even by the old standards. A lot of significant work was not accounted for in execution time consumption. Those two resource pools were the main subjects of interest.

When I started analyzing what needed to be updated in RC, I hoped there was a set of objective rules that would tell me how to make the adjustments.

There are none.

So I tried to make it so that at least the amounts of consumed resources would be easy to understand.


State bytes.

Values of state byte consumption are now expressed in hour-bytes: one byte of RAM held for one hour is the basic unit. State objects use RAM not just for the object itself but also for index nodes - this is now reflected. A lot of objects have a well-defined lifetime. F.e. comment_cashout_object is created on comment_operation and lives for exactly 7 days.
When there are many potential lifetimes, the longest is taken into account. F.e. account_recovery_request_object is created when an agent initiates account recovery. It lives at most HIVE_ACCOUNT_RECOVERY_REQUEST_EXPIRATION_PERIOD (one day), which is used to calculate memory consumption, but it might be removed earlier with recover_account_operation.
Some objects, once created, live forever, f.e. account_object. If "forever" was applied directly, consumption of hour-bytes would always be infinite. Therefore I've added an assumption that the average memory required to run a node will double after 5 years, so all the memory consumed now can be considered "covered" after 5 years. Hence "forever" means 5 years of consumption - a totally arbitrary value.
Finally, some objects have no definite lifetime (so potentially forever), but can be removed with an explicit operation, f.e. vesting_delegation_object created when VESTs are delegated with delegate_vesting_shares_operation. Such objects are assumed to last for "half forever", that is 2.5 years - again, a totally arbitrary value.
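As a rough illustration of the hour-byte accounting, here is a sketch with made-up object sizes (only the lifetime rules come from the description above):

```python
# Hour-byte accounting sketch. The byte sizes below are invented for
# illustration; only the lifetime rules follow the post.

HOUR = 1
DAY = 24 * HOUR
FOREVER = 5 * 365 * DAY        # "forever" is capped at 5 years (arbitrary)
HALF_FOREVER = FOREVER // 2    # removable-on-demand objects: 2.5 years

def hour_bytes(object_and_index_bytes, lifetime_hours):
    # one byte of RAM held for one hour is the basic unit
    return object_and_index_bytes * lifetime_hours

# comment_cashout_object: well-defined lifetime of exactly 7 days
print(hour_bytes(400, 7 * DAY))      # hypothetical 400 B incl. index nodes
# account_recovery_request_object: charged for its maximum lifetime (1 day)
print(hour_bytes(120, 1 * DAY))
# account_object: lives "forever", i.e. 5 years for accounting purposes
print(hour_bytes(500, FOREVER))
# vesting_delegation_object: no definite lifetime but explicitly removable
print(hour_bytes(60, HALF_FOREVER))
```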

The result of the above change is that operations creating temporary objects consume far less of the state byte resource than operations creating permanent or long-lasting objects; previously the difference was far smaller. The difference is reflected in the final cost. The RC cost of operations such as account_create_operation greatly increased compared to the previous version (consumption increased over 6 times). On the other hand, the cost associated with state byte consumption of operations that only create temporary objects, like vote_operation, decreased considerably (consumption dropped over 3 times). Since transaction_object - used to track "known" transactions - no longer holds the whole transaction body, the state byte consumption of custom_json_operation and its associated cost became negligible.

One more thing. When the new consumption was calculated with the old method - just at the end of the transaction - some operations that didn't even report any state consumption before, like account_update_operation, suddenly started to report significant consumption. That particular operation looked too important to become exceptionally costly (even free accounts need it). Therefore a new mechanism had to be introduced: differential use calculation. It is best described with an example. When account_update_operation just replaces an old key with a new one, no new state is allocated, therefore state consumption is zero. However, if the operation is used to extend an existing authority, all the new entries are counted and they cost a ton. The mechanism has to check state not only after the operation is executed, but also before. Only a handful of operations are covered by the mechanism - just those where the overcharging resulting from its absence would be significant (after all, use of the mechanism comes with increased execution cost).
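A toy sketch of the differential idea, with a hypothetical flat per-entry cost (the real code measures actual allocated state, not a flat rate):

```python
# Differential use calculation sketch: state consumption is the *growth*
# of state, not the state present after execution. ENTRY_BYTES is invented.

def authority_state_bytes(authority):
    ENTRY_BYTES = 64   # hypothetical RAM cost per key/account entry
    return ENTRY_BYTES * len(authority)

def account_update_state_cost(old_authority, new_authority):
    before = authority_state_bytes(old_authority)
    after = authority_state_bytes(new_authority)
    # only newly allocated state is charged; shrinking or replacing is free
    return max(0, after - before)

# replacing one key with another: no new state, no state cost
print(account_update_state_cost(["old_key"], ["new_key"]))        # 0
# extending the authority with extra keys: every new entry is counted
print(account_update_state_cost(["key"], ["key", "k2", "k3"]))    # 128
```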


Comparison of fill level between the old and new state byte pool. The "bump" that starts after block 56M is a lot of new accounts being created. Once the action stopped, the new version of the state pool recovered, because normal operations consume less state than previously.


Execution time.

Quite some time has passed since the execution time consumption of operations was measured, and the computers used for running nodes have become faster, so the minimum change had to be updating the values. However, the old execution resource pool was designed to cover the time of replay; plenty of long-running processes executed during live sync were not accounted for. The most prominent is the time spent calculating public keys from transaction signatures. The pool parameters were changed to focus on time spent in live sync (after all, that is what determines whether the nodes can keep up with timely processing of blocks). Since more processes are covered by the pool, the budget was increased. Its half-life was reduced to 1 hour (from 15 days), so it reacts to changes in traffic much faster (depletes faster but also refills faster).

Counting the time spent on signatures flattened the differences between operations. There are just three operations whose own execution time is higher than the calculation of a single signature - in most cases (and over 99% of transactions consist of a single operation) the execution time associated with signatures became the dominant portion of what a transaction uses out of the related resource pool. In the case of custom_json_operation, despite the JSON parser optimization (which pretty much removed all execution time spent on the operation itself), the use of the execution time resource almost doubled compared to the old version where signatures were not accounted for.
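A simplified model of what this means for a typical transaction; the timings are invented placeholders, only the structure (per-operation cost plus per-signature cost) follows the post:

```python
# Why signature verification now dominates execution time: for a typical
# single-operation, single-signature transaction, the signature term is
# the largest draw from the execution pool. All numbers are hypothetical.

SIG_VERIFY_US = 50            # cost of recovering one public key

OP_EXEC_US = {                # per-operation execution costs
    "custom_json_operation": 5,   # JSON parser optimization made this tiny
    "vote_operation": 10,
    "transfer_operation": 8,
}

def tx_execution_cost(ops, n_signatures=1):
    return sum(OP_EXEC_US[op] for op in ops) + n_signatures * SIG_VERIFY_US

# over 99% of transactions carry a single operation (and one signature)
print(tx_execution_cost(["custom_json_operation"]))  # 55, mostly signature
```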


Comparison of fill level between the old and new execution time pool. The new one is more "jagged" because it reacts quicker to changes in use; also more of it is now used by transaction signatures, which shows after block 56M.


Other pools.

I have not changed the other pools, except for the new mechanism.

At the beginning there was the idea to introduce a new pool that would cover Hivemind db storage separately from history bytes. In the end it was shelved, but it exposed a problem: when you wanted to add a new RC pool, you had to adjust the parameters of all the other pools. That's because previously each pool treated the global RC regeneration rate (sort of "new money" in the system) as if all of that RC was to be spent only on that resource. But how should global regen be split between different pools? It could be split evenly, however that would not be natural. When denim becomes the new fashion trend, you can expect more money to flow to denim suppliers, while wool suppliers might need to lower their prices. The same principle was applied to the RC regeneration rate - it is split between pools proportionally to resource popularity in the last period (which in our case means the last day - yet another arbitrary value). The mechanism makes sure popular resources can't be hogged by users of a single type of operation. F.e. the use characteristic of custom_json_operation is that it consumes mostly history bytes and then a bit of execution time (for signatures). There are times when there are so many such operations that they can't even fit in blocks. In the absence of the mechanism in question, eating a lot of history bytes would mean all other operations automatically rise in price. So comment_operation, which is usually a lot bigger than a custom_json, would not only have a hard time getting into a block due to its size, but would also potentially be outpriced. The new mechanism helps here. When history bytes become more popular, their pool popularity share increases, which also means the shares of other pools decrease. So comments can actually have their cost lowered during such a time, because they also consume a significant amount of state, which drops in price as a result.

Introduction of the above mechanism cut the baseline price by a factor of 4, because the neutral share is 25% of global regen for each pool (there are 5 pools, however the pool of new account tokens is controlled by witnesses, so it was excluded from the changes). The baseline was subsequently multiplied by 3 to compensate. Why 3 and not 4? Because I was trying to keep overall RC consumption roughly the same as before, and the value of 3 gave results closest to that goal.
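A minimal sketch of the split, under the assumption that a resource's price is proportional to the slice of global regen its pool receives; the consumption numbers below are invented:

```python
# Popularity-share split sketch: global regen is divided among four pools
# (new account tokens are excluded) in proportion to each resource's share
# of consumption over the last day, and the result is multiplied by 3 to
# compensate the baseline cut described above.

GLOBAL_REGEN = 1_863_000_000_000     # raw units, close to the examples below
COMPENSATION = 3                     # chosen so overall RC use stays similar

def regen_per_pool(last_day_use):
    """last_day_use: dict pool -> consumption over the last day (normalized)."""
    total = sum(last_day_use.values())
    return {pool: used / total * GLOBAL_REGEN * COMPENSATION
            for pool, used in last_day_use.items()}

# a hypothetical custom_json-heavy day: history bytes dominate, so the other
# three resources end up with smaller shares (and therefore lower prices)
print(regen_per_pool({"history": 59.0, "state": 15.0,
                      "exec": 24.3, "market": 1.7}))
```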

The mechanism carries the most influence on the final price due to how fast it changes. Let's compare it to the other factors.

Global regen is directly proportional to the amount of VESTs in the system, and over the history of STEEM/HIVE there is really just one episode when it swung significantly (upward when exchanges used their customers' tokens to help a certain individual take over STEEM governance, and downward with HF23 - the HIVE hardfork - and the subsequent power downs from users that wanted to stay on STEEM and treated HIVE as free money to be cashed out quickly). By the way, while it steadily increases, we are still 35% below the average levels from STEEM times. The RC cost of an operation is directly proportional to the global regen when other factors are excluded - the more global regen, the more all operations cost.


Global regen change over time.

The fill level of a resource pool influences RC cost inversely, that is, when there is twice as much resource available, it costs half as much. There are some variations in pool parameters, but that is roughly the case. Let's look at an example of three transactions of the same size (the same use of history bytes = 195B) from times of different fill levels of the history pool. Their costs per byte, normalized to a common global regen, are the following:

If we use that to calculate the cost of the remainder of the history byte pool in each case, we get:

  • 6181T or 3090 per point of global regen
  • 6082T or 3041 per point of global regen
  • 6115T or 3057 per point of global regen

No matter how much of the pool is already consumed, the remainder costs roughly the same, which shows the inverse proportionality. What it means in practice is that the change in price can be significant with a small change in fill level, but only when the pool is mostly dry. On top of that, it takes a lot of time to consume or refill a pool (with the exception of the new execution time pool due to its very short half-life).
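That inverse proportionality is easy to demonstrate in a couple of lines; the constant k stands in for the cost-curve parameters and all numbers are illustrative:

```python
# If the price per resource unit scales as regen / fill_level, then the
# cost of buying up the whole remainder of the pool is the same no matter
# how full the pool is - while the unit price explodes as the pool dries up.

def unit_price(regen, pool_level, k=1.0):
    return k * regen / pool_level

for pool_level in (8_000, 4_000, 2_000, 500):
    remainder_cost = pool_level * unit_price(regen=1_000, pool_level=pool_level)
    print(pool_level, remainder_cost)   # remainder cost stays at 1000.0
```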


Fill levels of the (new, in the case of state and exec) resource pools over time.

Finally, the popularity share. Let's see an example of how fast it can influence the price - pairs of similar transactions in vastly different popularity settings:

The above transactions are not from the same block, but pretty close. Pool fill levels were 44.61% history, 83.93% state, 57.85% exec, global regen at 1863G.

Now let's move forward just a bit shy of 12 days. We have another pair of almost identical transactions (the second one is actually a bit smaller than its equivalent from the previous pair):

Pool fill levels are now 51.77% history, 75.55% state, 83.93% exec, global regen at 1871G.

Global regen is almost the same in both cases (0.4% difference); the change in fill level of the history pool moved its resource price modifier from +124.16% down to +93.16% (16% difference), the state pool from +19.15% up to +32.36% (11% difference), and the exec pool from +72.86% down to +19.15% (45% difference). The changes in fill level affect the price somewhat, but nowhere near enough. After all, the cost of custom_json_operation dropped to less than half, while at the same time the cost of create_claimed_account_operation more than tripled.
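The percentage differences quoted above are ratios of the full price multipliers (1 + modifier), which can be verified quickly:

```python
# Back-of-the-envelope check of the fill-level effect quoted above: the
# percentage differences are ratios of the price multipliers (1 + modifier).

pairs = {                       # resource: (modifier day 1, modifier day 12)
    "history": (1.2416, 0.9316),
    "state":   (0.1915, 0.3236),
    "exec":    (0.7286, 0.1915),
}
for resource, (before, after) in pairs.items():
    change = (1 + max(before, after)) / (1 + min(before, after)) - 1
    print(f"{resource}: {change:.0%}")   # history 16%, state 11%, exec 45%
```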

The difference comes from changes in resource popularity share. For the first pair it was 58.98% / 15.01% / 24.32% (with the rest on market bytes; new account tokens always use a 100% share modifier), while the second pair saw 34.12% / 46.42% / 15.79%. It is the sharp increase in state byte popularity that tripled the cost of operations with heavy use of state bytes. As you can see, the shares of other resources dropped as a result.


Change in popularity share over time.

The price change due to popularity share depends on how much a particular operation uses the resources that gain/lose popularity. F.e. comment_operation is generally heavy on state byte usage, unless it is an exceptionally long article. vote_operation is mostly history bytes, with a solid 20% in exec time. custom_json_operation has a similar characteristic, except that while votes are all the same size, jsons can be small or big, and the big ones are even more heavy on history bytes. Transfers and similar market operations take half of their cost from market bytes.


Change in share of costs over time.


Summary.

  • Social operations such as votes and comments (but also follow custom_jsons) are on average slightly less expensive. Only most recently have votes become around 40% more expensive (while at the same time the much more costly comments dropped by 20%).


  • Market related operations, with the exception of some rarely used ones and withdraw_vesting_operation (execution cost now covers all steps of withdrawal), are up to 50% cheaper. But they were cheap to begin with.

  • Account maintenance operations like account_update_operation are on average 80% more expensive, but that heavily depends on what you do - adding new authority entries is expensive, changing existing ones is not.

  • Governance operations such as voting for witnesses or updating proposals cost roughly the same as before. Only the creation of long lasting proposals is more expensive.

  • All forms of account creation became a lot more expensive, although most recently it is just two times more.

  • New account tokens - no change (by design).

  • Custom jsons (with the exception of follows, which were charged extra before but are not anymore) became on average 40% more expensive, but most recently more like 70%. With the popularity share of history bytes already over 60% (and the cost share of that resource over 90%), there is not much room for further growth in price. The only event that could still make it grow a lot more would be the max block size being increased and the history pool potentially running dry.


Future.

One significant change is on hold, waiting for witnesses to actually allow RC to start effectively limiting average users. Currently, with the max block size at only 64kB, a transaction has a much greater chance of not getting into a block due to congestion than due to the user not having enough RC (although there is a possibility that survivorship bias is at play here - we really have no tools to observe how frequently users struggle with lack of RC). Once the max size is increased, we might see more resources being consumed, RC pools drying up and RC costs skyrocketing. If that happens, the change would be to tie the budgets of the history and execution pools to the max block size. But real data is needed first to analyze such a scenario.

And here I go back to RC delegations. If the above indeed happens (RC costs skyrocketing), normal users, especially new ones, will be priced out. For bot operators it won't be a problem to acquire more RC delegations. But for people that are just starting their journey with Hive it would suddenly add another big step to the learning curve - one that might be too hard to deal with and discourage them.
The alternative is even worse - whales giving out free RC delegations proactively. It would mean a great amount of new RAM consumption.


TL;DR

Most users are not affected, simply because they were not running dry on their RC to begin with. The most affected are bots of any level of wealth and free accounts that only use custom_jsons (if you engaged in social activities, your account would quickly grow out of the "free" qualifier).

If your only activity is custom_json_operation, especially if it aligns with various popular events on Splinterlands, then your RC costs will rise by 50% on average, but double most recently. If your activity revolves around creating new accounts, then your RC cost will at least double, and at times it might even rise 10 times. If your activity is mixed, you won't be much affected. If you mainly do social operations, your RC costs will drop. If your activity is trading (but not with the use of custom_jsons), your RC costs will drop significantly. This is all mainly due to the massive increase in the use of custom_jsons. If the situation changes - if users start to blog more, or trade more, or if the average custom operation gets smaller due to changes in apps - then the RC costs will adapt pretty fast.


If you have any questions regarding RC, feel free to ask in the comments. I tried not to repeat too much of what was already described in the articles posted by the original developers of the system around HF20, but it is obvious that not everyone read them :o)


What do you think about customizable RC costs set by users? Imagine fees like on ETH, but with RC as the fee. Essentially you are bidding to be the first included in the block.

A system such as BTC's/ETH's is like an auction - the wealthy always win. I very much prefer the current system, where as long as one has the money (RC), even the poor guy can buy at the shop at a predictable price (transact on the blockchain), and it doesn't matter that the guy behind him in the queue is bathing in cash.

There are a few things mixed up here. I'll leave ETH out of it because that is somewhat more complicated (the block limit is variable) and it is close enough to just stick with BTC for simplicity, but with BTC the auction is the ONLY mechanism for allocating space, meaning "congestion" is effectively normal. When demand exceeds supply, the only solution is for the fees to become very high at which point the wealthy can push others out.

With RC, the pools also limit usage dynamically (if calibrated appropriately), meaning RC costs will increase and usage will decrease (because more and more accounts will be locked out of transacting periodically, or will need to reduce their usage to avoid hitting the cap). Therefore, any congestion should be only a short term transitional phenomenon (vs periods in BTC when fees have remained high for months, and could in theory be permanent), and much more muted in magnitude.

That being said, the wealthy can still win out with RC because if demand is high relative to resources, RC costs will go up and you would need a lot of HP to do much of anything.

I think some sort of priority mechanism - either a multiplier or an additive bonus - makes sense. Some transactions have more of a need to get into a block quickly (or at all) compared to others.

It makes sense to introduce a priority queue for specific types of transactions. F.e. in times of high traffic it would be preferable for witnesses to be able to include transactions that change blockchain parameters (like increasing max block size), because they can fix congestion that way. Also some operations are time sensitive, like account recovery or canceling "decline voting rights". However I'm against allowing wealthy users to use their basically unlimited RC to outbid normal users from transacting on the blockchain. If we ever do something like this, it would have to be a fee that burns actual Hive, not RC.

Also, there's no consensus rule on which transactions are included, so witnesses could certainly include their own transactions to e.g. change parameters, regardless of RC or other congestion.

They could, but there is a bit of difference between having to mess with the code and having the mechanism already coded in :o)

Fee seems fine although I'm not sure it should be HIVE directly, since 0.001 HIVE is kind of significant. For very cheap transactions with only modest congestion, you might want an even smaller fee. One way to do that would be to burn HIVE to receive a bunch of more granular fee credits you could use later.

I'm not sure it is that significant. New account tokens replace 3 HIVE fee and most recently they cost over 10T RC. Also most recently average custom_json is 400M RC. So a fee of 0.001 HIVE for a custom_json would be like paying only ten times more RC.

The price of HIVE could go way up!

Perhaps one could argue that if the price of HIVE goes up a lot, usage of the blockchain would also go up, so fees for using it going up too (even if only during congestion) is not a problem - but I don't think that is clear. They're not necessarily tied together.

We could use HBD to pay the fee though, 0.001 USD will likely never be a lot.

Mega helpful post. Thank you.

I need time to digest/re-read some stuff, but this means most side-chain projects (e.g. HE and Honeycomb, which are 100% JSON-based) will need to increase their accounts' RC vests substantially. On the other side, if one would centralize more diverse types of pool calls on the same account, that might become beneficial due to the "resource popularity share" - am I understanding it right?

The new mechanism helps here. When history bytes become more popular, their pool popularity share increases, which also means the shares of other pools decrease. So comments can actually have their cost lowered during such a time, because they also consume a significant amount of state, which drops in price as a result.

This reminds me of how Slurm's "fair tree" fairshare works. Very interesting to see some of these problems being shared across completely different systems that aim to solve the same problem: "fair" scheduling of resource consumption.

Super excited to finally understand more of this stuff =) Really overwhelming how much thought was put behind all this. Really amazes me 😎

Great to read on how things happen under the hood, very informative.

Do you think a stablecoin could be created that is based on the cost of computation? As you have measured the cost of performing various operations, Resource Credits start looking a little bit like a stablecoin, with the price of the various operations changing relative to their computational costs and usage. RCs are of course derived from Hive Power, whose price fluctuates - but if we had a kind of RC invented from scratch and not priced in fiat money, would it be possible to make a stablecoin like that?

First of all, RC is free. Sure, it is a byproduct of vesting, and VESTs are subject to inflation (well, liquid HIVE is as well, but you can sell it immediately, while VESTs are harder to escape from when their value decreases due to printing of new HIVE). But I don't feel "spending" RC is really paying for activity on the blockchain. The system is just a way of persuading those that have too little of it to maybe invest a couple of dollars in HIVE. If you are asking whether it would be possible to invent an RC-like token that would reflect the cost of running blockchain infrastructure, then the closest would be HBD. HBDs reflect fiat dollars, and those can be used to pay for computers and power bills. Using HBD to pay fees for transactions would be like paying banks for debit card issuance or money transfers - it functions in the real world, so it could also function on Hive. That would change the financial model though, since now transactions are (pseudo) free. I feel like changing "free" to paid would not be well received :o)

No, no, I'm not thinking at all about changing the RC model. I'm just thinking about possibilities for creating a different kind of stablecoin than the ones in existence. Right now, all stablecoins are designed to follow the value of fiat money. What is fiat money based on? Governments, oil, things like that. So the crypto stablecoins are dependent on those things also. I am just wondering if it would be possible to create a stablecoin that is fundamentally based on something else, such as the cost of computation. Something like a unit of computation that all products and services can be priced in.

If that happens, the change would be to tie budget of history and execution pool to the max size of block

I don't entirely agree with this. It might make sense to increase blocks in order to reduce short term rejection while not necessarily allowing longer term chain growth. In fact, if RC were robust enough (it currently isn't), blocks might even, in some really theoretical abstract sense, be unlimited as a matter of individual hard byte limit (an actual limit might depend more on execution time and other resources), but you wouldn't want long term growth to be unlimited.

really interesting and a lot to take in!

sounds like the RC delegation feature might not be just "all good", thanks for pointing that out. Hope the team with @blocktrades didn't encounter any bigger problems with the HF being so delayed.

@tipu curate

By the internal schedule, HF26 should have been out last year, but we were waiting for OBI. In the meantime one change after another kept getting added. At least now the team is in that state where no one wants to change anything in order not to accidentally break the code, so we just review open issues and add tests. Let's release the thing already! :o)

yeah, the silence from the team has me a bit worried that they broke it though 😅

I feel that it is going to be difficult to delegate RCs to accounts that will need them as I don't think anyone will want to do that manually. Hope there will be something to address this

It's not hard for any large stakeholder or app operator to do those delegations with a script/bot. I'm pretty sure some already do that with HP delegations.

what would be obviously useful would be if one could delegate RCs to e.g., (not) random accounts that are active and have less than 10 HP or so. Hope somebody will write a script like that.

There are already a number of tools doing this with HP delegations for the same purpose; they will just have to be re-jigged for RCs. I think there is a lot of potential in the curation accounts, where for example @ocdb could delegate RCs to new onboards and use the HP to vote for value as they ramp up.

great, hope we get a manual/app for when it's here

I am not sure if this is the way it will be or not, but is it possible for an account to give authority to delegate RCs?

For example, I delegate HP to a curation account like @ocdb and grant them the possibility to delegate that portion of my RCs to new onboards.

It might be too complex with undelegations etc...

As far as I remember, the posting key is required to set (or clear) an RC delegation, so yes, it is possible. There were talks about introducing custom authorities that could be tied to specific operations or even types of custom_jsons; when that is implemented, you could allow some service to manage RC delegations for you without giving it the authority to vote or post in your name.

That would be nice. I have always seen the possibility for there to be liquidity pools of RCs available for apps to draw upon. Similarly, there could be LPs for accounts too, where those in need can buy from the pool, distributing the HIVE to providers. These could be paired together as well, where buying an account from the pool would also give it adequate RCs - and of course, there could be "hot seat" accounts that apps use, where a user can function but not own the account directly, and when they log out and back in, they occupy another seat.

Again, perhaps too much complexity at this point, but thinking ahead for scalability, if there does happen to be millions of new users coming in.

and here I was thinking it would be all sunshine and rainbows. Still excited for the new HF, but the growing complexity will be a challenge for onboarding people


I have Keychain set to auto-claim accounts and have 135 of them so far. I want them to get used, but we still can't transfer them to others. I've noticed the RC cost of claiming an account varies extremely, so it is good to see a bit of explanation for why. I would instead love to have a pool that I could delegate my RC to that would then use it for onboarding and getting new accounts going. I think it's pretty lame for a new account to be handcuffed so much to begin with. The community sniffs out the garbage pretty fast.

It's sad to say but I was one of those who cashed out on Hive.. But at least I also cashed out on Steem 😂 and Steem be dying like fuck now


I honestly don't understand much about these issues, I'm just learning, but the quality of the post seems very good to me.

Thanks for sharing.

I have a feeling this will directly affect my Podping operations, but I can't for the life of me figure out what will happen till it happens. I'm just hoping that a large delegation of RC from my main accounts can replace the vesting delegation and take up the slack.

The alternative is even worse - whales giving out free RC delegations proactively. It would mean a great amount of new RAM consumption.

This is the most probable scenario. Frontends/bots that create new accounts often delegated some HP to make the life of the new account easier (it's really easy to hit the RC limit at the very beginning of using Hive), so I believe they will change the vesting delegations to RC delegations, because it's just cheaper.

Thanks heaps for this, mate. I haven't kept up with updates, so good to know what's coming up. Be interesting to see how this affects botted Splinterlands accounts.
