Core dev meeting #56

in #core · 3 months ago

@howo

In terms of updates: I've been quite sick for the past two weeks, so my update is quite lean. But long story short, I heard of a new community bug that I'm looking into.

Somehow communities managed to get rid of their owner, which is not something you're supposed to be allowed to do, so I'm looking into that. Meanwhile, I'm butting my head against the CI changes in Hivemind. As I mentioned in chat yesterday, somehow I cannot get the test results anymore. That's something that's worrying. Well, not worrying, but something we'll need to look into. Beyond that, more bug fixing, and then I'll be back to finish the beneficiary rewards updates.

@blocktrades

Okay, so I was actually confused about that part. I was able to download the artifacts from that job. Did you see the download button on the side?

@howo

Yeah, basically you can download some artifacts, but it's mostly a bunch of logs that aren't much use for what I need. What I'm looking for is the actual test results. Previously the artifacts included the XML and JSON files that showed which tests were failing, and GitLab would also display a full test report. I can show you an example after the meeting.

@blocktrades

Well, I mean, I know about the normal test results. I didn't bother to look at them, because I thought you just said you couldn't download the artifacts.

@howo

Oh, yeah, sorry, I meant these specific artifacts.

@blocktrades

Okay, yeah, I'll look into that offline. We can talk about it after the meeting, I guess.

@howo

But yeah, that's mostly what I'm working on right now.

@blocktrades

Okay, so we're still just doing final testing, I guess. We've run into one issue with Hivemind that showed up on two of our test systems: for some reason it stalled during massive sync on both of them. I'm not sure we're going to be able to reproduce it, so it may have been a one-time occurrence, because it happened at the same time on both systems, so maybe it's something very strange. We've been trying to debug that for the past couple of days, just trying to figure out what's going on there.

But other than that, I think everything looks pretty good. I think we're pretty much ready for a release, or at least a new release candidate prior to the release. My plan is this: I'm syncing up new versions of everything now, and I'm going to start serving the data from api.hive.blog off the new code. That'll give people, some of the apps guys, a chance to test their code against our server and also against other servers and see if they have any differences. We've been using Go Replay to check the differences between the API responses from the new code and the old code, and everything was pretty much as expected. We found a few small bugs in the new code, but those are fixed at this point, and the remaining differences actually tend to be bug fixes in the new code versus the old code.
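(As an aside, not the actual Go Replay setup, but a minimal Python sketch of the same response-diffing idea: send an identical JSON-RPC call to the old and new endpoints and report any difference. The new-stack URL and the example method are placeholders.)

```python
import json
import requests  # assumes the requests package is installed

OLD_NODE = "https://api.hive.blog"          # current production endpoint
NEW_NODE = "https://new-stack.example.com"  # placeholder URL for the release-candidate stack


def call(node, method, params):
    """Send a single JSON-RPC 2.0 request and return its 'result' field."""
    payload = {"jsonrpc": "2.0", "method": method, "params": params, "id": 1}
    response = requests.post(node, json=payload, timeout=30)
    response.raise_for_status()
    return response.json().get("result")


def compare(method, params):
    """Fetch the same call from both nodes and print whether the results match."""
    old_result = call(OLD_NODE, method, params)
    new_result = call(NEW_NODE, method, params)
    if old_result != new_result:
        print(f"DIFF {method} {params}")
        print(json.dumps(old_result, indent=2, sort_keys=True)[:500])
        print(json.dumps(new_result, indent=2, sort_keys=True)[:500])
    else:
        print(f"OK   {method} {params}")


if __name__ == "__main__":
    # Example call; in practice the list of requests would come from captured production traffic.
    compare("condenser_api.get_dynamic_global_properties", [])
```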

So everything looks good at this point in terms of the responses of all the API calls. Like I said, we'll run the release candidate for maybe a week and see if anybody reports a problem. If not, we'll just go ahead and release everything.

I don't know if there's any questions about that.

@gtg

I'm not sure about the account history change, decreasing the limit to a reasonable amount. Do we have any feedback from anyone on that?

@blocktrades

No one said anything, but it's a straightforward change for them, so it's not like it won't work. If they don't do it, it'll just be lower performance, so it'll be obvious that they'll want to fix it. I don't anticipate any real problem. But yeah, no one said anything about it being a problem for them, so I took that to mean there isn't one. I don't know what else to do.

@gtg

Yes, yes, there's no other choice.

@blocktrades

I can say, here's the thing that needs to be changed, and let me know if there's a problem with that.

@gtg

Actually, that's not a breaking change. I mean, not strictly breaking.

@blocktrades

No, it doesn't break anything per se; it's just a performance issue. If they ask for a thousand operations, they don't really want a thousand operations. It won't be good for them, because they'll get more data than they really want to deal with, and it won't be good for us, because they're asking us for more data than really needs to be asked for. But I don't see it as a critical problem, and I think it's an easily fixed one, so I'm not really too worried about it.
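(For context, a minimal sketch of the kind of account history call being discussed, using the existing account_history_api; the account name and the 100-operation limit are illustrative values, not the agreed-upon cap.)

```python
import requests  # assumes the requests package is installed

NODE = "https://api.hive.blog"  # any public API node

# Illustrative request: instead of always asking for the maximum 1000 operations,
# ask only for the most recent 100, which is less data for both client and server.
payload = {
    "jsonrpc": "2.0",
    "method": "account_history_api.get_account_history",
    "params": {
        "account": "blocktrades",  # example account
        "start": -1,               # -1 means "start from the newest operation"
        "limit": 100,              # smaller limit instead of the 1000 maximum
    },
    "id": 1,
}

history = requests.post(NODE, json=payload, timeout=30).json()["result"]["history"]
for index, entry in history[-10:]:
    print(index, entry["op"]["type"])
```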

@mcfarhat

Silence usually means yes.

@blocktrades

So yeah, that's the way I took it.

@mcfarhat

But it's actually a great change. I liked how you handled this, because it makes it much quicker to fetch a specific type of transaction. It makes a lot of sense.

@blocktrades

So yeah, it should be better for everybody. We would have done it this way at the beginning if we could have; it was just that the old technology for the account history couldn't handle it. We tried it and it just didn't perform well. Yeah, too much of a load on hived.

@mcfarhat

By the way, speaking of this update, I just wanted to ask you: we're running the hived instance using the HAF server, as you're aware. It's running perfectly, but I've had a lot of issues with Hivemind. Whenever we get close to sync, and Hivemind actually reaches sync and we start serving all the services, I think there's some issue that causes a malfunction in the database. I don't know if it's an old issue, because I had to wipe the data again and restart the Hivemind sync separately, without making any change to hived. I don't know if I should do a git pull again and fetch the latest changes, because I remember you guys worked on a new script, and I don't know how that might affect the current setup I have for the server. Should I go ahead with this, just do a git pull and update everything? I'm just concerned about breaking hived and doing a re-sync for another few days. That'll be...

@blocktrades

Yeah, I understand. I would say wait, because once we tag all the release candidates, you're definitely going to want to update at that point, because there have been fixes that you don't have yet. And when you do that, we're also going to make snapshots available, so you'll be able to just download a snapshot.

@mcfarhat

Okay, so I'll wait. I'll just hope that Hivemind will be able to sync within a couple of days now. If it doesn't, then I'm just going to wait for you guys to...

@blocktrades

Yeah, I'd say that's great. I'm going to start a new sync of the new Hivemind today, actually. I expect it's going to take about three days to sync, and after that we'll put it up and make it available.

@gtg

Also, we will all profit from you running those release candidate versions, because there will be more eyes on those specific versions.

@blocktrades

Yes, yes, for sure. So in about three days, maybe three or four, give it that range, we'll have the new stack up. And like I said, I'll tag all the releases, probably tomorrow. I think I've tagged a bunch of them already, but just unofficially; probably tomorrow I'll tag everything. So if anybody wants to try it before we have the snapshots available, they can run it on their own machines, or they can wait for the snapshot once it's up.

@mcfarhat

Okay, excellent. And another question: I noticed that whenever I'm loading hived and I download the artifacts file, it never accepts the artifacts file. It gives me an error about the version, and it regenerates the artifacts file on its own. Is this common, or is it something happening just on my server?

@blocktrades

No, that's not common. It always does a check of the artifacts file. But if the artifacts... So what exactly are you downloading, are you downloading a new one?

@mcfarhat

I actually go to Gandalf's site and pull both files together, the artifacts file and the block log. I put those into the blockchain directory and just start the replay. And every single time I get the error about the artifacts. Yes.

@gtg

It will be like that, because there's no way you can get a block log and, at the same time, an artifacts file that matches that block log.

@blocktrades

Oh, are you updating yours constantly? Are you constantly updating the one that he's downloading, Gandalf?

@gtg

Yes, it's a live artifacts file.

@mcfarhat

That's what I thought. It did come to my mind that it's not going to work, because they're not going to be in sync. That's what I thought.

@blocktrades

Well, no, I mean, they would match if it were what I would have done, which is just take the two files and copy them over there once. But you're getting live ones that keep changing. The other option would be to have a copy that doesn't change all the time, and then they'd be perfectly matched.

@mcfarhat

So you would save us two hours, Gandalf.

@blocktrades

Wait, it takes you a few hours to generate an artifacts file?

@mcfarhat

I don't know. I think so. Yeah, yeah, probably.

@gtg

Okay, so you really don't want to download the block log file from my side over and over again, because you most likely have a perfectly fine block log on your side already. So once you download that block log, just keep a copy on your side. Then you can safely re-fetch the missing part, or just use the file that you have.

@blocktrades

Yeah, I mean, you don't really need an up-to-date block log. Any block log, even one that's weeks old, is not any kind of problem; it'll catch up very quickly. So once you have a block log...

@mcfarhat

So you think the catch-up would be faster than actually grabbing it?

@gtg

Yeah. Because when you download from my server, the maximum you can get is the speed of my server, while with the peer-to-peer network you're getting it from all the other peers. Of course, when downloading from my side you are not validating it, which is nice because it means you trust me, but you shouldn't. When you get it over peer-to-peer it's validated, and that's why it's slightly slower. But yeah, I think nowadays it's not a big deal.

@blocktrades

It's not like it used to be. It used to be a pretty slow process, but we've sped the synchronization process up a heck of a lot.

@mcfarhat

Yeah, yeah. I've noticed it has sped up magnificently and I really love that, but still, whatever time we can save...

@blocktrades

So basically what you should do is get the block log from Gandalf, generate the artifacts file one time, and then save it off somewhere so you have it. Then any time you need to restart or re-sync, you can just copy it back over and use it.
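(A minimal sketch of that save-and-restore idea, assuming the usual block_log / block_log.artifacts naming in hived's blockchain directory; both paths are placeholders to adjust for your own node.)

```python
import shutil
from pathlib import Path

# Placeholder locations; adjust to your node's layout.
BLOCKCHAIN_DIR = Path("/home/hive/datadir/blockchain")
BACKUP_DIR = Path("/home/hive/artifacts-backup")

# hived normally keeps the artifacts index next to the block log,
# typically named block_log.artifacts.
ARTIFACTS = BLOCKCHAIN_DIR / "block_log.artifacts"


def save_artifacts():
    """Copy the generated artifacts file somewhere safe after a replay."""
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    shutil.copy2(ARTIFACTS, BACKUP_DIR / ARTIFACTS.name)


def restore_artifacts():
    """Copy the saved artifacts file back before restarting or re-syncing,
    so hived doesn't have to regenerate it (valid only as long as the
    block_log itself hasn't been replaced with a different copy)."""
    shutil.copy2(BACKUP_DIR / ARTIFACTS.name, ARTIFACTS)
```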

I'm trying to think of anything else worth mentioning. I don't know if anybody had any questions about anything I wrote in the last post about the changes that will be required, or the differences that are coming with the new stack. I don't know if everything was clear.

@mcfarhat

No, not really, I think it's been good. But I did have a question about the block explorer and how much progress we're making there. I mean, we're eager to...

@blocktrades

We're going to release that as part of the initial release. I wasn't sure if we were going to fit it in, because originally it was still a little slow in my opinion; it was taking around 72 hours to sync. Now it's down to maybe 16 hours or something like that on our fast machine, so it's gotten quite fast. We've also dramatically reduced how much storage space it takes up, so it's actually very compact now, relatively speaking. So there's no reason why all the API nodes shouldn't run the block explorer back end, at least. Maybe not necessarily the front end, that's up to them of course, but at least the back end, and hopefully we'll have a bunch of people running the front end too. That will leave us with a bunch of distributed nodes able to basically act as a common block explorer.

@mcfarhat

Yes, exactly. We're looking forward to that. We really want to run our own instance of block explorer.

@blocktrades

So yeah, hopefully in this next update. And that's definitely one of the apps we're going to include by default, so we'll even have it in the snapshot.

Let's see. Any other questions? I don't really have a whole lot else to report. I mean, we've been doing a lot of stuff, but it's detailed work that I don't think will be particularly interesting to you guys.

@howo

Yeah, I don't have anything else on my end, apart from like the whole CI thing, but we can take that offline.

@blocktrades

Okay. Anybody else have anything they want to report in terms of work they've been working on or questions they have?

@gtg

Oh, there is a request for homework from @crimsonclad. She's asking about a roadmap, a decentralized kind of roadmap: we should all think about how to contribute our apps' plans in a way that can be presented as some high-level roadmap. So I guess we can think on that before the next meetup, or preferably earlier.

@blocktrades

Yeah, we should. I guess we can discuss it in the various chat channels and just start contributing. It sounds like what we really want to do is collect a bunch of information about what everybody's planning to do next, so we all know what each other is working on. So, plans for 2024. Yeah, sounds like it.

@mcfarhat

It can be as simple as pre-meeting notes, not minutes, or a schedule or something. It could be a Google doc: just put in a few bullet points and open it up for discussion.

@blocktrades

Yeah, sounds good. I'm probably not going to be able to work on my part of that until at least after I get this release out, but like I said, hopefully that's finished up this week, and at that point we can start thinking about other things.

@arcange

Should we talk about our suggestions about sunsetting reputation?

@blocktrades

Oh, I just saw that came up yesterday, I think. I didn't really read it closely; I just read that there was some discussion of maybe dropping reputation or doing something else. I guess my feeling is that reputation, while it's fundamentally flawed and we all know that, still serves a useful purpose for the moment, so I don't think it makes sense to drop it per se. There are obviously various things we could try to do to improve it, but nothing really straightforward comes to mind. If anybody has particular ideas they want to suggest, that's fine, but I tend to think the base system is not completely broken, and I don't see an easy fix.

@howo

My main suggestion is basically capping it so that we don't have to do unnecessary calculations.

@blocktrades

That makes perfect sense to me. There's actually not much more to say about it, but...

@howo

Because basically, we could cap it, since after 65-ish it's basically exponential and therefore basically meaningless. So my whole suggestion is to cap it between zero and 65; after that, Hivemind stops calculating, and that would potentially save us a bunch of processing.
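(For reference, a minimal sketch of the raw-to-display reputation mapping commonly used by front ends, with the proposed clamp applied; the 65 ceiling is just the value floated here, and this is a simplified illustration rather than the actual Hivemind code.)

```python
import math


def display_reputation(raw_rep, cap=65.0):
    """Map raw on-chain reputation to the familiar display scale (new accounts
    start at 25), optionally clamped so values past the cap need no further work."""
    if raw_rep == 0:
        return 25.0
    score = math.log10(abs(raw_rep))
    score = max(score - 9.0, 0.0)          # ignore the first nine orders of magnitude
    score = math.copysign(score, raw_rep)  # keep the sign of the raw value
    score = score * 9.0 + 25.0             # rescale so a fresh account sits at 25
    if cap is not None:
        score = min(max(score, 0.0), cap)  # proposed clamp, e.g. 0..65
    return score


raw = 170_000_000_000_000                            # example raw reputation value
print(round(display_reputation(raw, cap=None), 1))   # about 72.1 uncapped
print(round(display_reputation(raw), 1))             # 65.0 with the proposed cap
```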

@gtg

Well, maybe not as low as 65, maybe 70.

@blocktrades

Yeah, 70 or 75, somewhere in there.

@gtg

Because beyond that it gets really ridiculous. Scammers quickly get above the reputation level I've had for years, and you don't want to lose that distinction.

@blocktrades

Yeah. Okay, so I guess we need to figure out a number, but maybe we can also just find out how much difference it really makes. We could try it as a change. Howo, is that something you'd want to just try, flip in a code change, and then see how it performs?

@howo

I haven't looked at the reputation code in a while, but...

@blocktrades

Well, I think one thing that's important to mention is that we're actually stripping out the reputation code into a separate HAF app. So that may solve the whole issue from that point of view.

@howo

Yeah, that's true, because it's mostly about Hivemind performance at this point, really. If it becomes part of a separate app, and can potentially even be run optionally by full nodes, that may solve the whole issue.

@blocktrades

Yeah, so let's see how that work plays out and then we can revisit this after that.

@howo

Yeah, I am down for that.

@arcange

Does it really have such an impact on performance?

@howo

Yeah, it's huge, because you have to compute it for all of the posts and all of the votes, basically. Whenever someone makes a post and whenever it gets votes, you have to compute the reputation impact. It's less the case now, but when we took over from Steem to Hive, that was the number one performance hog in Hivemind.

@gtg

Like 10 hours of computations.

@howo

Yeah, it's insane.

@mcfarhat

For us, as an active front end, we don't use that reputation at all; we have our own system, because with how it works now it doesn't make much sense for us. Maybe in the future, if it changes, we can implement it, if there's an alternative implementation via Hivemind. For now it gives you some sense of the age and activity of a user on the platform, but if you get downvotes you go down, so it doesn't make much sense from our perspective. But yeah, we're all for whatever the community decides.

@gtg

It's not about developing new fancy solutions, just making a quick fix, so we don't have to spend more time on it, because the old system is beyond repair. So yeah, just a quick fix.

@blocktrades

Maybe a patch is fine. If there's a good patch, why not. Okay, well, like I said, let's wait until the reputation app gets finished up; I doubt it'll take that long, honestly. Then we'll see where we go from there. All right, I think we're done with that, and that's some good progress there. I don't have anything else. Do you guys have anything else? Otherwise, we'll just close the meeting.



Sorry about your health... Wishing you a speedy recovery. Here are some virtual flowers 💐💐💐 Get well soon!


Sorry about your health, and I wish you a quick recovery, as fast as possible.
Also, what do you mean by sunsetting reputation?

It sounds complicated, but good luck resolving all the problems. Thank you.

Your health challenges notwithstanding, your commitment to tackling problems is commendable.

Your struggle with bugs and making Hivemind run without hitches is a priceless contribution to the community.

I personally appreciate that you are always open about what's happening and being discussed in core dev meetings. Keep up the good work as you strive to improve the Hive ecosystem; it takes real dedication. Get well soon, and may you continue achieving your goals!