Core dev meeting #46 (with full transcript)

Hello all! New month, new core dev meeting. This one is quite short. I apologize for the framing + distorted sound, something went wrong with the recording.

I'm trying a new format given that the sound isn't great: using https://openai.com/blog/whisper/ I built (and edited) a transcript that you can find below. Tell me if it's worth the trouble or if you prefer the previous thing that I did (quick summaries).

@howo:

Well, I don't have much for this meeting. I'm mostly working on the hardfork 28 recurrent transfer improvements. It's going well, I have the initial implementation, but I'm working on making all of the tests pass now. It's kind of a pain because I have to update all of the existing stuff to support the extension.

I have a bunch of discussions to bring up afterwards to get your feedback on some design decisions.

@blocktrades:

So in terms of things that we're doing, I guess one of the more important things is that in the next release of HiveD, we are basically dropping Ubuntu 20 and requiring Ubuntu 22 as the development platform.

A couple reasons for that:

I guess one is RocksDB: we're updating the version of RocksDB we're using inside HiveD, and we need to switch to Ubuntu 22 for that, partly because it uses a more advanced file I/O library that allows faster file access.

The other reason we're switching to Ubuntu 22 is just to get away from having to deal with both Postgres 12 and Postgres 14, because those two versions of Postgres have at times behaved quite differently on queries, and it's just more work for us to support fast queries that work equally well on both.

So we'll be officially moving to Postgres 14 as part of the change. Other things we've done: we found some bugs in the int128 library that's inside HiveD, in the FC library. Rather than work on fixing those bugs, it just made more sense to replace it with the standard int128 library.

So we're using that instead; we found some issues along the way while testing it, and that's one of the things we're working on.
We've also made some tests for massive recurrent transfers, just to test performance when there are a lot of them. I guess @howo is probably aware of that.

We fixed a problem where account names could be truncated when they were too long for the fixed-length string type inside HiveD. There's a bunch of small things we've done in there.

I guess one of the most significant things we're doing in HiveD that's related to HAF is changing the way we write operations data into the Postgres database: instead of writing it as JSON text, we're writing it in binary format, and that's allowing us to significantly shrink the size of the HAF database.

Correct me if I'm wrong, but I think it was around 700 or 800 gigabytes we're saving with that change, but it's been a while since I looked at the number so I can't be sure. Lately we've been doing performance testing just to assure ourselves that there won't be any significant performance degradation with the switch, and so far it seems pretty good.

So that's been going on, and I think those are probably the two most important things for other devs to be aware of: the Ubuntu 22 switch, and the fact that we're switching to a different internal format for the operations stored in the HAF database.

@imwatsi:

I've released the delegations API. It's got a number of endpoints, but the feature of note is that you get to see both incoming and outgoing delegations for an account, as well as an overview of balances and a history of all delegations made to or from an account.
I also added delegation notifications to GNS, the global notification system.
And yeah, from probably this month on I'll be focused on GNS.

I'm currently working on author, curation and comment benefactor rewards. There are more details about the other projects I'm working on; you can find them in the last post I made from the @freebeings account. It's quite a lot and I think it would take up too much time to talk about here.

@blocktrades:

I actually had a question for Bartek. When we switch the operations to binary format, how much impact do we expect that to have on existing HAF applications?

Bartek (apologies, I couldn't find his Hive account):

I hope the interfaces to existing HAF applications won't need to be touched at all. Hopefully, the only required change is, at most, a cast to JSON when someone needs to process an operation. At least our tests specific to hivemind and to HAF applications showed that no significant changes were required to run such applications with the new binary format.
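To make that concrete, here is a rough sketch of what such a cast could look like; the table and column names are illustrative placeholders, not necessarily the real HAF schema.

```sql
-- Illustrative sketch only: assumes an operations table whose "body"
-- column now holds the binary representation instead of JSON text.
SELECT id, block_num,
       body :: jsonb AS body_json  -- cast back to JSON only where needed
FROM hive.operations
WHERE block_num = 5000000;
```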

@arcange

Yes, I got a few questions from some people I had a talk with.

They're asking me whether we have a long-term roadmap or vision for what will be coming next. They're always asking about smart contracts. I know you would like to see smart contracts on layer two.
Sometimes they say that when we need trust, it might be better to have them on layer one. So what's your vision on this?

@blocktrades:

Yeah, so as far as the roadmap in general, we were definitely planning on putting something out soon.
We've just been tied up with finishing up tests we've been developing and cleaning up a lot of stuff before we get into too much new stuff. Probably next month we'll put out a roadmap.

As far as smart contracts, yeah, I can say right now that's one of the big things on the roadmap coming up.

It's hard to fully address the first layer versus second layer issue here, but I'll certainly write about why I don't think it makes any sense at all to do it on the first layer instead of the second layer.

I've written some stuff in the past about it, but I can certainly write more if people have questions on this issue, because I think it's pretty straightforward why it makes more sense to do it on the second layer than the first layer.
The implementation we're looking at for smart contracts is basically SQL-based, with SQL as the contract language.

Essentially, we see the smart contracts as being implemented on top of a HAF database. Someone will be able to come in and basically publish their contract as a transaction, and then those contracts will be able to run on the HAF database itself. So HAF databases will be the smart contract processors.

I mean, that should give us pretty much everything. I think that'll allow us to do a lot pretty cheaply, because SQL already has a nice security system in place for protecting smart contracts from messing with one another. It already has a lot of features for handling resource limitations so that, for instance, some contract doesn't tie up the database entirely or grab all the CPU: you can limit how long a query runs before it just gets stopped.

So, there's a lot of stuff there that's just really nice for us in terms of a smart contract environment that's well sandboxed, I think.
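As a rough illustration of the kind of Postgres features being referred to here, the sketch below uses hypothetical role and schema names; none of this is actual HAF code.

```sql
-- Give each published contract its own role and schema, so one contract
-- cannot read or modify another contract's tables.
CREATE ROLE contract_a NOLOGIN;
CREATE SCHEMA contract_a AUTHORIZATION contract_a;

-- Limit how long any of this contract's statements may run before
-- Postgres aborts it, so a contract cannot tie up the database.
ALTER ROLE contract_a SET statement_timeout = '250ms';
```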

@arcange:

So your idea is to broadcast the SQL code to the blockchain, and then it gets sent to HAF?

@blocktrades

The HAF database will get it like all the other data from the blockchain. If you're running the smart contract app, if you will, on top of HAF, it would process those smart contracts: it would verify they are sound and make them available for execution.

@arcange

Could a contract make changes in a HAF server?

@blocktrades

Yes, absolutely, sure. It wouldn't be storing anything in the blockchain, and it wouldn't be making changes there; it'd be making changes in the HAF database.

@arcange

And what about a contract changing balances?

@blocktrades

No, they wouldn't be able to change balances of HIVE or HBD. They'd only be able to change balances of new tokens.

@arcange

Okay, so you could automate transfers, things like that.

@blocktrades

Right, you can do basically anything with the second-layer tokens. I mean, it's still early, right? It's a general vision; the details will come over time.

@howo: I edited out a discussion about recurrent transfer IDs, it's not very relevant to the audience, feel free to listen to the video :)

Bartek (apologies, I couldn't find his Hive account):

I have one thing, probably for you. While working on the recurrent transfer changes, you probably had to prepare the blockchain source code for another hardfork. Could you separate those changes and put them into a different merge request, to provide that code before all the changes specific to recurrent transfers?

@howo:

Sure. I will do that.

@imwatsi

I just wanted to ask because I think @blocktrades mentioned something about a token protocol at some point. Like on layer two, like what SMTs were supposed to do.

So, yeah, I've been kind of working on that. I haven't written a lot of code yet, I'm still doing a lot of research and stuff. I just wanted to know, are you working on that?

@blocktrades:

Not exactly, but kind of. What we'll be working on is a smart contract platform. So right now, if you want to run a HAF app, you have to get the operator of the HAF server to install your app on their server, right?

So, you know, like GNS, you're going to have to get somebody to run that on their HAF server.
So the idea is: if anybody runs this smart contract app on their HAF server, that means they've basically said you can now run your smart contract on their server, without having to specifically get them to install your specific app. So that's kind of the difference from existing HAF apps.

The operator has to explicitly install it, whereas if they install the smart contract app, then all the smart contracts that get created in the future will automatically run on their server.

I think it's safe to say that these smart contracts will work in many ways just like a HAF app, except that they're auto-installed, so to speak, and they can interact with each other and more. Right now each HAF app has its own individual data space and there's no interaction between them; here you can build apps that interact with each other in a common data space.
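A hypothetical sketch of that data-space difference; none of these schema or table names come from actual HAF code.

```sql
-- Today: each HAF app the operator installs gets its own isolated schema.
CREATE SCHEMA gns_app;
CREATE SCHEMA some_other_app;

-- Envisioned: one pre-installed contract engine owns a common schema,
-- and published contracts create their tables inside that shared space,
-- so contracts can interact with each other's data.
CREATE SCHEMA contracts;
CREATE TABLE contracts.token_balances (
  contract_name text    NOT NULL,
  account       text    NOT NULL,
  balance       numeric NOT NULL,
  PRIMARY KEY (contract_name, account)
);
```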

I think if you start investigating how to handle tokens in SQL, that could be useful as a starting point for creating smart contracts for tokens that would run on HAF. So I wouldn't want to say that what you're doing is necessarily going to be duplicated or wasted; it's something you could certainly continue looking at.
But I think the ultimate platform for it has to be something that runs in a smart contract system, because you want all people to be able to create tokens without having to always get permission from the HAF operators for those tokens.

@imwatsi

Okay, I get it. So I have to keep that in mind as I'm designing it, that it should be able to...
Well, yeah. Maybe when I get more details...

@blocktrades:

I guess where you'd start is taking a look at existing smart contracts and token platforms, and seeing how some of those are implemented.
Obviously, there are going to be differences here because we're talking SQL instead of another language, but I'm just saying in terms of what their functionality is and how it's implemented. That's something you could start looking at.

End

That's it for me! Thanks for tuning in.

@howo

The transcription is great!
Any thoughts on excluding the HBD in the DHF from the virtual HIVE supply? As HBD in the DHF grows, it has an impact on the base for inflation.


Thanks! Regarding the change, it was done in HF23 or 24.

Just for the debt calculation... the virtual HIVE supply (the inflation base) still includes the HBD in the DHF.

thanks for the update!

Will the Correction of U.S. Silver and Gold Coinage that will be used to back our New and Improved Currencies, both Physical and Electronic, play a part in any of your near Future plans...???