Core dev meeting #51

in #core

@howo:

A new month, a new quarter of meetings. So on my end, thank you for the review, bartek. One of my old HiveMind merge requests has been merged, so an old bug is about to be fixed whenever we release it. I'm not quite sure what the release schedule is for HiveMind, but it's low priority, so whenever you guys are ready.

On top of that, I started looking at the documentation a bit, because Crimson has been telling me that it's been hard for her to onboard new devs: they are confused by the state of our docs, which were done by inertia a while ago. So I looked into it, and I wanted to touch base on that with you, Gandalf, because from what I understood, you are also looking into it. So those are my points for the meeting.

I have a bunch of code locally that I haven't pushed, but I'm going to submit the communities type 2 and 3 probably this week or the next for review by bartek. It's mostly tests at this point.

@blocktrades:
OK, sounds good. You mentioned the release schedule, so that's very appropriate, since I think we're going to have a release. We're going to do a release candidate, probably this week, for just about everything: HiveD, HAF, pretty much all the Hive apps, and hopefully some of the other new apps as well. The release candidate will come out this week sometime, and then we're aiming to do an official release around the end of the month.

So in the meantime, we'll basically solicit people to start trying it out. We'll run it on production tests, and we'll also be doing performance benchmarking and things like that during this month. I've put out posts previously that document what's going into the new release, but we'll also create a list of release features along with the release candidate. In terms of the biggest changes I can think of: one is that in HAF we're now storing the operations in binary format.
I've been encouraging anybody who's doing HAF development to use the develop version for a while now, so finally we'll be to the point where they can use the master version, I think, without necessarily having to use the develop version for everything.

Beyond that, another thing that's kind of important is that we've created new record types for all the blockchain operations in HAF. The idea behind this is, first of all, that it makes things more performant. It's sort of a best-practices way to create your HAF app: you wouldn't necessarily have to use them, but it's extremely highly recommended that you do. It'll also make your app easier to combine with other HAF apps. It's probably been a while since I talked about this idea, so maybe just to mention it again: one of the driving forces behind HAF was that we were looking, from the beginning, at how to improve the API of Hive.

This is also, I guess, significant in terms of talking about documentation. So the idea I kind of settled on was that we move most of the API development outside of HiveD itself and into basically HAF apps.

So the idea is Hive pushes all the data into a HAF database. And then we can create whatever kind of APIs we need with that data, and it gives us a lot of flexibility. So on the one hand, it means you can create your own custom API, but on the other hand, it also means that we've got a way that somebody can basically create an API and then provide it to other people.
As an example of that, we created this balance tracker app, which we've been improving recently. It gives you basically an entire new API for doing things like graphing the balances of an account over time, for instance, something that before you had to do all the work for yourself, and it was a decent amount of work. We're also creating a similar API for the block explorer that we're developing. The block explorer can be divided into two parts: there's the API, which is a HAF-based app, and then there's a UI that runs on top of it and actually uses that API. But that API can be used by anybody who's developing a block explorer, or just anybody who wants to make some of the API calls that are available in that block explorer API app. So I hope that's clear as to why I think HAF is really important. It's really our next-generation API in many ways, and it basically redefines the flow of how we create our API. In the past, changing the API was a very centralized process, because all the changes had to be made at the HiveD level, which is sort of the most security-sensitive area. Whereas now, basically anybody can create a new API that they plug in on top of Hive, just using a HAF app to do it.
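To make the "anybody can call these APIs" point concrete: Hive API nodes accept JSON-RPC 2.0 requests over HTTP, so a client only has to build a small JSON body. A minimal sketch follows; note that the `balance_tracker.get_balance_history` method name and its parameters are hypothetical placeholders, not the balance tracker's documented call signature.

```python
import json

def make_rpc_request(method: str, params: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 request body of the kind Hive API nodes accept."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": request_id,
    })

# Hypothetical method and parameters, for illustration only; check the
# balance tracker app's own docs for its real API.
body = make_rpc_request(
    "balance_tracker.get_balance_history",
    {"account": "alice", "from_block": 1, "to_block": 1000},
)
```

The same helper works for any API exposed by whatever HAF apps a given server happens to run, which is the point: the transport stays the same while the set of available methods is decided by the node operator.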

We've seen a couple of people do that just recently. @mahdiyari, in fact, basically created a tool similar to the HiveSQL app, which gives us a new API as well, and also created one for notifications. So I hope that's clear as to why I think this is really important for us: the idea that anybody can now add to and modify the API itself if they want to, and change its characteristics for performance or for ease of use, whatever they need. So anyways, that's the gist of what we're doing in terms of HAF. Beyond that, we've recently made some more bug fixes and improvements aimed at easing the life of node operators: changes to the block log tools and improvements to the replay and sync process. We've also got a couple of new apps that will be released as part of this upcoming release. We have Beekeeper, which is essentially a new way to manage keys and also to create transactions. We're hoping this will be a library that basically everyone can use going forward: a very small footprint, very well-audited way for anybody who's building an app to generate transactions and sign them.

There are really two versions of Beekeeper. One's a C++-based app, and that's essentially for desktop applications. And then we've also got a new tool we're creating now, Beekeeper WASM, which is basically a WebAssembly version of Beekeeper, so you can run it inside a browser. I think that'll be really interesting for anyone who's creating JavaScript-based apps with Hive. Beyond that, we've also got two other apps that we're creating right now. Clive is a replacement for the CLI wallet that's been around for ages and really hasn't had much done to it in a while; it's basically been in maintenance mode. Clive is a much easier-to-use version of that. Even though it's a console app, it really has a GUI, so that makes it a lot more friendly. And the other app we're working on right now is Denser, which is a replacement for Condenser. This is essentially a totally new app that models the look and feel of Condenser, but uses newer technologies; we're trying to develop something that's much easier to maintain. Both of those are proceeding pretty well. I think we're going to include some version of Clive in the new release. Denser is really a little more separate, so I don't know if we'll see a release for that; probably not. So that's kind of what we've been working on.

Oh, and since @howo mentioned those improvements to the community roles feature: yeah, that's also been incorporated in Hive, and that'll be released as part of the general release. So essentially almost every HAF app is going to get a new release as part of this process. That's pretty much it, I guess, for our plans for the next month.

Anybody else? I don't know if anybody else wanted to cover what they've been working on. Or if anybody has any questions; actually, maybe it's better to open up for questions first, about the new release or any of the new features or tools.

@borislavzlatanov:
I had a question about the APIs, or basically I'm following on from what you said. I wanted to get feedback about a potential additional layer on top of everything you just said about the APIs and the flexibility of empowering the devs to create their own API how they want it. The basic principle, I guess, is that it would be more robust if there were many small APIs, like even...

@blocktrades:
Yeah, that's exactly the intent of HAF. In fact, originally I used to call it modular hivemind (I think I might have stolen the name from @howo). The idea behind it was that hivemind had this really large API of various, somewhat unrelated API calls. So we wanted a way that you could basically create apps with smaller sets of APIs, so you could decide: I want to support these APIs on my HAF server, so I'll run this app. Basically, by choosing what apps you run on your HAF server, you decide what APIs you want to support on your server. And then you can either make that a public HAF server whose APIs other people can use, or, if you're running your own private server for your application, you can just pick and choose which APIs you need to support.

@borislavzlatanov:

Absolutely, that's really great. And I guess where I'm going with this is potentially reducing the server requirements, in terms of hardware, because right now, in order to have HAF, you also have to run HiveD, which creates its own server requirements. So if there was a way to have very tiny hardware requirements and still provide an API, you could have a large number of APIs. An additional layer, and how that could be done potentially, is if there's a standalone database, a popular database, that queries a full HAF node and just gets, in real time, only the blockchain data that it's interested in, let's say only this type of custom JSON, or something like that, right?

@blocktrades:
Yeah, that's totally feasible. I mean, that can easily be done today. Essentially, like I said, what you're talking about is even possible now, because when somebody's running an API node, he's running a server; there's still some legacy stuff, but if he's running the latest stuff, then he's already running a HAF database that's got a huge amount of data, all the data for the account history API, for instance. So you could make calls to his node and then just save off the data that you want to your local database. That's completely feasible.

I guess the issue I would see right now with doing that is if you did it without HAF technology backing your own local database, and maybe that's something I want to think about. The thing about having it backed by HAF, rather than just using a generic database, is fork handling: if you're running a HAF-based database, it'll automatically fork back, everything will revert to the right version, and you won't run into inconsistency problems. Whereas if you're just querying a HAF server to get the data from it, then when a fork happens, the data changes on the HAF server, but your database wouldn't necessarily know that the past data you had fetched had changed due to the fork.
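To make the fork problem concrete, here's a toy in-memory sketch of the kind of undo bookkeeping a fork-aware store needs. This is not HAF's actual implementation (which lives inside Postgres); it just illustrates why a generic external database that only fetches data, without keeping undo records, can end up inconsistent after a fork.

```python
class ForkAwareStore:
    """Toy model of HAF-style fork handling: every change made by a
    still-reversible block is logged so it can be undone on a fork."""

    def __init__(self):
        self.balances = {}   # account -> balance (the app's state)
        self.undo_log = []   # (block_num, account, previous_balance)

    def apply_block(self, block_num, transfers):
        """Apply a block's transfers, recording undo info for each change."""
        for account, delta in transfers:
            self.undo_log.append((block_num, account, self.balances.get(account, 0)))
            self.balances[account] = self.balances.get(account, 0) + delta

    def revert_to(self, block_num):
        """Called on a fork: undo every block above block_num, newest first."""
        while self.undo_log and self.undo_log[-1][0] > block_num:
            _, account, previous = self.undo_log.pop()
            self.balances[account] = previous

    def set_irreversible(self, block_num):
        """Undo records at or below the irreversible block can be discarded."""
        self.undo_log = [u for u in self.undo_log if u[0] > block_num]
```

A plain mirror database has no equivalent of `revert_to`, so after a fork it would keep serving balances derived from blocks that no longer exist on the winning branch.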

So that could be one particular issue with the implementation you're describing versus just running a full HAF app. I'll have to think about that and see if there's something we could do to mitigate it. Of course, if you're working in irreversible mode, that's not an issue: if your app is only getting data after it's irreversible, then that's not a problem at all. And frequently, that's probably just fine. When we developed HAF originally, handling reversible data was much more important, because it took a while for data to become irreversible. But once we introduced the one-block irreversibility feature, most data becomes irreversible very quickly, within a second or less. So maybe that's not such a problem nowadays.
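Under that irreversible-only approach, the external sync can be as simple as buffering head blocks until the last-irreversible block number catches up. A minimal sketch, using a made-up event-stream format of `(block_num, last_irreversible_num, data)` tuples:

```python
def irreversible_only(events):
    """Filter a stream of (block_num, last_irreversible_num, data) events,
    yielding each block's data only once it can no longer be reverted.
    Head blocks are buffered until the irreversible number catches up."""
    pending = []  # (block_num, data) pairs not yet irreversible
    for block_num, lib, data in events:
        pending.append((block_num, data))
        ready = [p for p in pending if p[0] <= lib]
        pending = [p for p in pending if p[0] > lib]
        for num, d in sorted(ready):
            yield num, d
```

With one-block irreversibility, the irreversible number usually trails the head block by a single block, so the buffering delay is on the order of a second, which matches the point above that consumers would barely notice.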

@borislavzlatanov:
Yeah, that's what I was thinking, because of the one-block irreversibility. Basically, the API can provide the blockchain data in real time, even though it queries just the irreversible data.

@blocktrades:
I don't think anybody would mind; in general usage, they wouldn't even notice the difference. The only time it might come up would be in the case of a large fork, one that forked back a number of blocks. But that's extremely rare nowadays. Generally, you'll see at most one block that gets reverted.

@borislavzlatanov:
Yeah, and I guess it's a must, well, not strictly speaking, but in order to keep it interoperable, it would have to use Postgres, everything the same as the...

@blocktrades:
I think that would be best. I mean, to me, it makes sense if we use the same tools. It makes it easier for DevOps to move back and forth between projects. And that's one of the thrusts we've actually had with HAF is we're moving more and more towards trying to make sure that if you develop your HAF app, you've got like a best practices way to do it that makes it easier if you go look at somebody else's HAF app and understand it. And also that you're doing it the best way, essentially, doing it the way that's most performant.

@borislavzlatanov:
Well, ideally it would be... You could even take the code and directly make it into a HAF app, right?

@blocktrades:
Yeah, absolutely. One of the most important things we did recently in terms of HAF, other than the binary format, was this idea of creating these record types: they're just SQL record types that define the blockchain operations in a standardized way. If you use those for your database, then it'll be easier to make it into a HAF app and to exchange data with another HAF app, too. It's all in the develop branch, so once we get the release candidate out... But like I said, I've been telling people for a while that if you're developing on HAF, just work in the develop branch right now. There are so many improvements there.


Sounds like HAF is really coming together. Hopefully, it will help create better search and sorting tools for the blockchain. I have always wanted a way to sort blogs from earliest to latest; this now sounds like a possibility. Great work.

This should be very easy to do with HAF.

Great. I look forward to the front-ends integrating that feature.