A peer-to-peer network for sharing and rating information


Why this post now?

In the past I’ve mentioned in passing that my primary goal working on Hive was to build a reputation and information rating system that I feel could have a dramatic impact on how we organize and make decisions, but I’ve avoided going into many details about it publicly.

There have been a few reasons for my reluctance to publicly discuss a rating platform in depth in the past: 1) many people might not have seen the importance of such a system until times like now, 2) I’ve been busy with clear and pressing preliminary tasks and didn’t want to get distracted by talks about such a platform (and there are a lot of potential variations in such a platform, so those discussions could go quite long), 3) there are many “radical” thoughts that go along with such a system that could generate controversy (and controversy can also be a big distraction), and 4) I’m likely to write very long posts on this topic, as you’ll see below. So, to sum it up, I wasn’t ready to spend time on discussion/design/development of the system, so I avoided initiating public discussions and debates about it.

But I’d like to start talking about the design of a rating system now, because I’m at a point where I can devote time and resources to the project, and I think recent events should be a compelling reason for many people to examine the ideas around this topic.

Originally I planned to discuss implementation details of the rating platform in this post, but just writing about some of the basic ideas that led up to the rating system resulted in an exceptionally long post, so I’ve decided to defer details to a later post, so that comments/discussions on related topics will be grouped together better. As a bonus, that may defer some of the controversy.

Side note: I’ve always wanted to call this project “hivemind” as it’s very descriptive of some aspects of its operation, but I’m a little hesitant to do so, because of the ease with which it could be confused with Hive’s existing hivemind software for processing blockchain information. But maybe we can work out some solution, perhaps by adding some extra words to each project.

What’s the point of an information sharing and rating system?

We are constantly receiving new information, rating the credibility of that information, and storing it away for future use. For most of us, our ability to do this well has a huge influence on how our lives proceed.

Most of this information comes through our senses (in some abstract sense, all of it does, of course, but I’m referring more to events we see/hear/perceive first hand) and most people are fairly efficient at rating the truth of information we receive this way.

Of course, we know that mental illness can impact our ability to even trust the things we directly experience, but this is a rather fundamental issue that is outside the scope of the platform I’m proposing.

The rating platform I’m proposing is aimed at mitigating a problem that we all have: how to accurately rate the information we receive through human communication.

Communication is one of humanity’s greatest abilities

Communication is one of the most powerful abilities we have as humans. Communication allows us to gain knowledge far beyond the physical constraints of our bodies and across vast stretches of time beyond what we could otherwise discover during our lifetimes, by sharing the information and insights of other people, even those no longer living. It’s also critical to how we organize to work on common goals, which is also incredibly important, since it’s difficult for any of us to do much completely on our own.

But for all the undoubted benefits of communication, it does come with additional problems, compared to direct perception. As its most basic problem, people can intentionally lie about events they have directly perceived or distort the information to suit their own purposes by framing the events in a biased way, omitting relevant information, etc. They can also simply make up things without any basis in reality at all.
A second problem is that most directly perceived information isn’t shared first hand; instead it’s exchanged “peer-to-peer”. And as anyone who’s played “telephone” as a child or watched the spread of gossip can tell you, humans are far from perfect replicators of the information they receive (computers are much better at this, fortunately, or blockchain-based financial systems wouldn’t be such a good idea).

A final problem with rating information is that the data we care about goes far beyond the data we directly perceive with our senses. Even when we see video data, and assuming we can trust that the video has not been altered, we need far more data than is contained in the video itself to make judgments about what is taking place. For lack of a better term, I’ll call this “higher-level” information, since it’s mostly derived from human computation on the sensory data we receive, using “conscious” thought/principles of logic/etc to arrive at conclusions/insights/etc.

I think it’s safe to say that the majority of the information we make long-term plans with is based on higher-level information. And perhaps not surprisingly, it is this type of information that is most contentious, because we can’t “see it with our own eyes”, and directly perceived information is what we find safest/easiest to trust. Nonetheless, higher-level information is critical to our lives, and we can’t simply operate by only trusting the information we directly perceive.

How do we rate information today (mostly the opinions of other people)?

Before I go into details on my ideas for creating a computer-aided information rating system, we first need to look at how we rate information today, in order to reasonably discuss ways to improve the process.

There are many ways we rate the information we receive, and for some types of information and some verification methods, these rating methods can be quite reliable. In fact, there are whole branches of information science, such as mathematics, where information can be “proved” to be true, generally because they don’t rely on the reliability of external information, but instead assume certain truths (axioms) and then derive new truths from those axioms.

But it’s actually rare that we validate information using just pure logic or other forms of math, although these can be powerful tools to assist us. For one thing, most people aren’t good at math: they aren’t good at properly framing issues in terms of mathematical models or in solving those models.

But perhaps the ultimate problem with validating/rating of information’s truth is that it can take a long time to do it, even when we could potentially do so.

So, given that none of us have an infinite amount of time to do an in-depth rating of all the communication we receive or even the capability to accurately do so due to locality and physical and mental limitations, what’s the primary way we rate information? We rely to a large extent on the opinions of other people, mixed in occasionally with logical consistency checking against other information we already believe to be true (for brevity, I’ll refer to this internal checking process as “critical thinking”).

If you take a hard look at it, it’s quite amazing how much of the information we rely on in our daily lives is the information we’ve been told is true by others, with little if any independent verification by us as individuals.

Despite the flaws in our rating of information, it generally works quite well, and has allowed us to achieve wondrous things that would never be possible if we all spent our time trying to independently verify everything we’re told.

The scientific method as a means of rating information

On the other hand, while we all can’t independently verify all information, we have found it’s important for more than one person to verify information, because of the human capacity for deception, bias, and simple miscalculation.

One of the key leaps forward in human progress was the establishment of the scientific method as a way for people to independently verify many types of information. You’ll occasionally see people who dismiss science as inherently biased, but these people have missed the whole point of science, which is to remove bias and other forms of error as much as possible from the rating of information.

But even with the scientific method as a means of independent verification, it’s not a method many of us employ directly, and certainly not for most of the information that gets communicated to us. Instead, we trust others who have claimed to perform the experiments and the people who have told us about those experiments.

Rating information by rating information sources (sources are mostly other people)

Note: for the purposes of this post, an information source isn’t just the place we originally learn about information from, as the term might imply; instead, it is all the people who give their opinions on the information, which might more properly be called “information rating sources” rather than just “information sources”.

We’ve all learned from experience that we shouldn’t blindly trust the opinion of everyone equally when it comes to rating different types of information. After all, people often disagree, and logic tells us that two opposite things can’t be simultaneously true.

Similarly, we’ve also learned that it is a bad idea to just trust the opinions of one person on all topics. Now that is admittedly a slight generalization, as there are some people who are mostly willing to trust one person on just about everything, but that’s a very bad way for an adult to operate, in my opinion, because I think there is plenty of evidence that there is no one person that’s uniquely qualified to rate all information. This is how children generally start out, trusting the information they receive from their parents over everyone else, but most eventually come to the realization that the knowledge of their parents has limits.

So if we can’t trust everyone, and it’s not safe or smart to trust just one person, and we don’t have time to independently verify most of the information we get, what should we do?

We rate opinions based on motivation, consensus, familiarity, and domain expertise

For most of us, the solution is to rate the opinions of other people based on how knowledgeable they are about the topic at hand (their domain expertise), then make adjustments based on how we think their motivations might affect our ability to trust their stated opinions and how their internal biases might distort their judgment, as well as the number of people on each side of any disputed information. It sounds like a lot of work, but we do it all the time, often somewhat unconsciously.
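To make that a bit more concrete, here’s a minimal sketch, in Python, of how such factors might be combined into a single weight. The factor names and numeric weights are invented purely for illustration; they aren’t part of any existing system:

```python
# Illustrative only: a toy model of how we might weigh one person's opinion.
# All factor names and weights are invented for this example.

def weigh_opinion(domain_expertise, motivation_alignment, consensus_support):
    """Combine rating factors into a single weight between 0 and 1.

    domain_expertise     -- how knowledgeable we judge the source to be (0..1)
    motivation_alignment -- how free of self-interest we judge them to be (0..1)
    consensus_support    -- fraction of other trusted sources who agree (0..1)
    """
    # A simple weighted average; a real system would be far more nuanced.
    return 0.5 * domain_expertise + 0.3 * motivation_alignment + 0.2 * consensus_support

# Example: an expert with a possible conflict of interest, backed by most peers.
print(weigh_opinion(domain_expertise=0.9, motivation_alignment=0.4, consensus_support=0.8))
```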

Rating of information sources based on perceived motivation

It’s hard to overestimate the impact of perceived motivation when it comes to how we rate an information source’s reliability. We’ve learned to adjust for motivation, because we know self-interest often overrides natural honesty. It’s the same reason we’ll often trust the opinions of a product owner over the product rating given by a salesman for the product.

We can see this motivational distrust now in US politics, where much of the population can be divided into two groups that distrust not only the motivations of the leaders of the other side (who clearly benefit from their information being trusted), but even the motivations of their followers as well.

This latter motivational distrust may at first seem illogical, because at first glance there doesn’t seem to be any self-interest in promoting incorrect information that doesn’t benefit oneself. But here we see a case where human thinking often goes wrong: we have a desire to be right about our opinions, and it is easy to let our desire to be right (or our desire for something to be true) overwhelm our critical thinking ability.

In passing, I will note that I think this “desire to be right” can serve a useful purpose: it allows for some stability in our beliefs, and prevents us from constantly flitting from one belief to another, which could make it difficult to achieve any long term goals. It also allows us to “go against the norm” and motivate the effort to prove that some belief we hold is correct. But I think it’s obvious that if we want to think optimally, we have to be willing to make adjustments to our beliefs when enough contradictory data appears.

Rating of information sources based on consensus

One of the other primary ways we rate information is based on the number of people who agree on the truth of information (especially when there are relatively few people who dispute the information’s truth). In other words, we tend to believe information based on the consensus opinions of other people.

Much of the higher-level information we believe is information believed by other people we identify with in some way. This might even be viewed as an extension of our desire to be correct ourselves: after all if we believe we’re often correct about information, it only makes sense that we will trust others more when they think like us on other topics.

This kind of thinking can go horribly wrong though, when large numbers of people reaffirm each other’s misguided beliefs, and can result in a short-circuiting of virtually all critical thinking by most members of the group (and generally leads to the exit of critical thinkers from the group).

Rating based on familiarity with the source

Everyone has their own unique method of rating the opinions of other people, but we can see certain commonalities used by most people. On average, I think we tend to rate highest the opinions of people we frequently get information from, especially when we see later confirmation from other sources that the information was true. It also helps when we are able to interact with these people in some way, as it allows for us to gain more insight into how they think and what they know.

At the beginning of our lives, this usually means our family, and later classmates, then work colleagues, etc. Of course, we don’t end up trusting information from all those people, either because they are often wrong on some topics or just because we don’t like them. But for many of the people whose opinions we rely on, our interactions with them have allowed us to make judgments about their motivations and their knowledge and rating skills in various areas, and this allows us to rely on them more than people that we’ve never met in person or had a long online relationship with.

The value of personal interaction as a rating mechanism

Many people consider “meeting in person” an especially important method of measuring a person’s trustworthiness, because we have some ability to read body language that isn’t available from online sources (or even from staged video recordings, where this kind of information can often be obscured).

I personally have found this a very useful means of measuring a person’s reliability and even skills, especially when I can interact with a person long enough that it becomes difficult for them to maintain a false facade. In such a situation, you have the opportunity to catch inconsistencies in statements unfiltered by third party reports, unguarded expressions, etc.

But I think it’s also fair to say that some people are terrible at making judgments about other people based simply on in-person experiences, especially when dealing with a person that is practiced at deception. I’ve witnessed cases where I was rapidly convinced that I was dealing with an untrustworthy person, yet been in the company of other people who found that person entirely believable (as a side note for those interested, one of the techniques these deceptive people often employ is effusive compliments, so be especially wary when someone compliments you about something that you yourself don’t believe to be true).

All this goes to say, your mileage may vary when it comes to being able to judge people from personal interactions. As another anecdote, for many years I felt like I could quickly detect when a person was intentionally trying to deceive me via various body and vocal cues, but then I had a meeting where I was listening to “difficult to believe” information for the span of many hours, and that person’s body language and speech never gave away a thing to me. The only way I was finally able to be sure that the person was attempting to con me was when I asked a critical question that underscored the implausibility of what they were telling me, and they skillfully went into a digression for about twenty minutes that almost made me forget my original question. It was in fact the skillfulness of the deflection that fully convinced me I was dealing with a con artist, despite the fact that their body language kept telling me they were trustworthy.

Rating based on domain expertise

Rating a person’s opinions based on that person’s domain expertise (knowledge and skills in an area) can be a tricky issue, in many cases. When you’re an expert in an area, you’ve usually learned ways to identify other experts in that area. But when you’re not an expert, it gets more difficult.

One way to tell an expert in an area is how often the answers they provide solve a problem in that area, but even this isn’t a perfect method, especially when you have little personal knowledge of the area. For example, if you have a computer problem, you may know someone who solves your computer problem every time, and you might conclude that they are a computer expert. But “computer expert” can actually mean lots of different things: a “computer expert” at setting up a computer to play video games is quite different from a computer expert who knows about designing software. And if you don’t know much about an area, you’re less likely to know where such subdivisions of expertise exist. Vice-versa, the more you know about the area yourself, the more likely you are to know how far an expert’s knowledge is likely to extend.

This is also one of the reasons we trust experts in an area to help us identify experts in related areas. We might not trust a general practice doctor to perform surgery, but it’s likely that they can do a better job of recommending a good surgeon than pulling a surgeon’s name out of a medical directory.

Can computers enable us to share and rate information better?

We’ve all seen the incredible impact computers have had on the ability of individuals to share information, in a way that was never possible before. Your computer can help you answer in minutes the types of questions that could have required weeks or months to answer in the past (if you could find the answers at all).

And, when used wisely, computers can already help tremendously when it comes to rating information.

I can’t begin to count how many times I’ve questioned the validity of something I’ve heard, then used a simple web search to get what I felt reasonably confident in concluding was the definitive answer on the topic. It’s a really great way to settle a lot of “back-and-forth” arguments in many areas.

But another thing that’s become increasingly obvious with time is that “you can’t trust everything you read on the internet”. This is because the traditional methods that society has used to rate the quality of publicly available information have not been able to keep up with the explosive increase in the number of information sources on the internet.

Sock puppets: fictional sources of information

One of the biggest problems that has emerged with internet-based sharing of information is the creation of fake sources of information. As discussed previously, our opinions about the reliability of information are often swayed by consensus beliefs about information. This method of rating information generally works pretty well, because it allows us to rely in part on the brain power of other people. But it can also be “gamed” on the internet, when someone creates a bunch of fake identities to make it look like many people are in agreement about an issue.

Such sources are often referred to as “sock puppets” because the words and actions of the fake identity are being controlled by a single person, much as a sock puppet has no real thoughts of its own. Sock puppets can take many forms: fake people, fake organizations, and even identity theft, where the sock puppet masquerades as some person or entity that actually does exist in real life.

Identity verification and anonymity

Many online information sharing platforms have recognized the problem of sock puppets and have developed methods for trying to fight the problem, with varying degrees of success.

Hive uses one such method to determine distribution of post rewards: it uses stake-based weighting of votes to compute the rewards, so there’s no strong reason for one person to create many accounts and vote with all those accounts on a post, as dividing up their staked Hive among many accounts won’t change the rewards that are given.
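As a rough sketch of why splitting stake gains nothing, here’s a toy model of stake-weighted voting (the real Hive reward math is more involved, but it’s the linearity in stake that matters for this point):

```python
# Simplified model of stake-weighted voting (the actual Hive reward
# calculation is more complex, but linear in stake, which is the
# property that makes sock-puppet splitting pointless).

def total_vote_weight(stakes):
    """Total influence of a set of accounts voting on the same post."""
    return sum(stakes)

one_account = total_vote_weight([10_000])          # all stake in one account
ten_sockpuppets = total_vote_weight([1_000] * 10)  # same stake split ten ways

print(one_account == ten_sockpuppets)  # True: splitting stake gains nothing
```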

On some platforms, such as Facebook, users are required to agree to only create one account for all their personal information sharing (they might also be allowed to setup an organization account as well), and the platform may require some proof of identity to create the account. But for most platforms, it’s difficult to fully verify the identity information provided and be certain that the person isn’t using a false identity.

A new problem is also created when relying on a single centralized vetting of identity information like the one above, because it gives an out-sized amount of influence to the vetting service. There are often economic forces that will drive such platforms to be honest in this activity, but it’s also easy to imagine cases where government interests could exert influence on them to make exceptions and allow fake identities.

There’s also a downside to requiring identity information as a requisite to information sharing: it’s not always safe or smart to share information and have it be publicly known that you are the person that sourced the information. It’s for this reason that “whistleblower” programs usually have mechanisms in place for preserving the anonymity of the whistleblower as much as possible.

Web of trust as a means of identity verification

An alternative means of identity verification is to use a web of trust system, and it’s probably the main way we validate identity in our normal lives.

Let’s take a brief look at how a web of trust system works and how we use them today to “know” a person is a real person. Let’s say you receive an email from someone who claims to be a young cousin of yours whom you’ve never met, but they are traveling from overseas to your area, and would like to stay with you while they are there. How do you decide if this person is who they say they are and if it is the kind of person you should be comfortable inviting into your home? Most likely, you’ll talk to other family members who would know this person, and find out what they say about them.

In this case, these family members are acting as your web of trust, and they meet some of the key characteristics that make for a good web of trust system. You’ve likely met and are pretty familiar with the family members you’re consulting with. They are domain experts with regard to your family members, and some of them will likely know this person. You trust that they will generally be motivated to give you accurate information (although you might rate information from this person’s mother at a different level of trust relative to other reports). If you receive generally good reports from several family members who’ve met this person (consensus), then assuming you’re not an extremely private person and have room in your place, you’ll likely allow them to stay for a while.

As you can see from the above example, we already use simple “web of trust”-based identity verification, and computers aren’t even required.

But the term “web of trust” implies more than just relying on the opinions of people you know directly. A good web of trust can enable you to rely on opinions of people you’ve never met and whom it may not ever be convenient for you to meet. This additional “reach” enables you to tap into the information rating power of more people, who will know about more domains than is possible for just the people you directly interact with to know.

In the case of identity verification, it works like this: you trust people you’ve interacted with a lot, and they vouch for people you’ve never met, and they also establish the validity of the communication channel through which you’re interacting with this remote person. Once the identity of this remote person has been established, they, in turn, can vouch for people that they know but your direct contacts don’t. In theory, this method can be used to verify the identity of almost everyone in the world, as long as they don’t live in a closed society that is cut off from contact with the rest of the world.
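One simple way to picture this extended “reach” is as trust flowing outward through vouching links and weakening with each hop. The sketch below is a toy model, with a made-up decay factor and graph, not a description of any finished design:

```python
# Toy model of transitive trust in a web of trust. Trust decays with each hop
# along vouching links; the decay factor and graph here are purely hypothetical.

from collections import deque

def propagate_trust(vouches, start, decay=0.5):
    """Breadth-first propagation of trust from `start` through vouching edges."""
    trust = {start: 1.0}
    queue = deque([start])
    while queue:
        person = queue.popleft()
        for vouched_for in vouches.get(person, []):
            score = trust[person] * decay
            if score > trust.get(vouched_for, 0.0):
                trust[vouched_for] = score
                queue.append(vouched_for)
    return trust

# I directly know Alice; Alice vouches for Bob; Bob vouches for Carol.
vouches = {"me": ["alice"], "alice": ["bob"], "bob": ["carol"]}
print(propagate_trust(vouches, "me"))
# {'me': 1.0, 'alice': 0.5, 'bob': 0.25, 'carol': 0.125}
```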

Now while the above system for verifying identity sounds great on paper, there are lots of problems that can lead to errors in such a web of trust system, but I’d like to end this post at this point, to leave the discussion in the comments section focused on the points discussed so far.

More to come

In my next post, I plan to take a look at some problems with extended web of trust systems, and how computer networks can help with those problems. I also plan to discuss how a web of trust can serve not only as a means of identity verification, but also as an alternative to identity verification in the traditional sense of the term.


If you take a hard look at it, it’s quite amazing how much of the information we rely on in our daily lives is the information we’ve been told is true by others, with little if any independent verification by us as individuals.

The past tense of Turkish verbs has different forms depending on how the speaker knows the act happened.
So 'Steve walked along the path' would have a different form of 'walked' if I saw him do it myself, than if somebody else told me about it, and I'm just passing along what I've been told.

At least that's what I heard on the radio once.

Yes, in a way it's interesting that not every language more forcibly requires these two different cases to be disambiguated.

The varying power of language to express different concepts is a fascinating topic in general. Two of my favorite sci-fi books on this topic are "Babel-17" and "The Languages of Pao". They both explore theoretical advantages and disadvantages that might impact human thought based on the languages used by the speaker. They're also just fun reads, of course :-)

“meeting in person” an especially important method of measuring a person’s trustworthiness.

So true.

I also plan to discuss how a web of trust can serve not only as a means of identity verification, but also as an alternative to identity verification in the traditional sense of the term.

Looking forward to it :)

I love the merit and forward thinking evolution of this concept.

While working today, I was thinking about how this blockchain prints tokens like crazy without considering the quality of the content. I agonize over the content and don't post something without baking the concept and fleshing out the bones before filling it with content like an article. I do have to refine my process and create articles with punch instead of having them run 4 pages each, but that is my issue. It seems it is my networking rather than the content itself that generates any kind of reward, judging by the lack of comments/replies lately. I just never know.

Anyhow, looking forward to reading further posts and the discussion in comments here.

Thanks again!

Networking plays a huge role in our reputation and in the rewards we reap for our other actions.

It's understandable to some extent, since we can envision the case of a person who solves some tremendously important problem, but never tries hard enough to share it with the rest of the world. On the other hand, we can also see cases where someone spends all their time networking, but doesn't do much in the way of analysis, basically just acting as a human information replicator (and likely a flawed one at that).

Historically, the latter type of person was more useful than they are today, when technology has developed better ways for us to spread information, but I think the desire to reward social interaction isn't going to go away (and clearly still has benefits). In the past, I would guess that extroverts were rewarded more than introverts (another way to talk about less extreme forms of those two types of people), but I think that is changing with time and technology.

What a time to be alive!

I appreciate this comment so much.
I still think the communicators have value, but the more evolved the network becomes, the better productivity we should see.
As long as the middlemen don't evolve faster, as in the case of Facebook and Google over the last few years.

... we can also see cases where someone spends all their time networking, but doesn't do much in the way of analysis, basically just acting as a human information replicator

And they are often the top rewarded account on Hive :)

There are highly rewarded accounts on Hive that often act as interpreters of information from other posts, but often these accounts are making very useful contributions, I think.

I've seen many posters that reword popular posts in a different way that can often be more accessible to a larger audience. Such posters often expand on the information, show potential linkages to information from other places, and speculate on possible outcomes. To me that goes far beyond mere replication.

Yes, you are right, I am a fan of these sorts of posts which collate from various sources (perhaps other content here) and present them in an easily digestible manner.

I was referring to the posts which, for example, pick an old news article about the loss of X tokens, multiply them by today's price, and voila, 'new content'.

I was referring to the posts...

Ah yes, those kind of posts, yeah, I'm not fond of them either.

Excellent information. I like how you make your posts. Your way of crafting messages makes them special. Every day we learn more and more from people; that's part of learning in life.

Would this all lead into some form of Digital ID system and a means of controlling one's data?

I would imagine that this system could enhance privacy by using a "group" for verification as opposed to some powerful third party.

Maybe I am misreading where this is heading along with how it is applied.

One of the aspects of this system definitely allows a person to have one or more digital identities, and use that identity to authoritatively take actions that will be respected as coming from a specific underlying person (or corporate entity, for that matter). Identity plays an important role in web of trust systems, since it allows us to rate the information sources.

And yes, you're entirely correct that this uses a group rather than a single authority to verify identity.

But it's fair to say that the uses of such a system extend far beyond just verifying identity.

Vitalik recently shared a post on improving reputation systems despite anonymity - https://ethresear.ch/t/anonymous-reputation-risking-and-burning/3926

It doesn't look really useful to me. What benefit can it bring in real-world use cases, besides "hey, I can trust this buddy because of his score"?

The idea of trusted groups ("web") makes a lot of sense to me.

I only had a suggestion for a name if you're not going with hivemind: merit.

Suggestions for names are welcome, but it's probably worth delaying suggestions until my next post, when it will hopefully become more obvious how far reaching the scope of this project is.

Merit would be a great name for a reputation system, but it's less descriptive when addressing a truth rating system based on a web of trust. I favor hivemind so far because it captures the concept of leveraging the power of other people's minds.

Merit, nice.

Well you weren't joking this was a long one!

But for all the undoubted benefits of communication, it does come with additional problems, compared to direct perception. As its most basic problem, people can intentionally lie about events they have directly perceived or distort the information to suit their own purposes by framing the events in a biased way, omitting relevant information, etc. They can also simply make up things without any basis in reality at all.

I think having blockchain technology and a network like Hive can definitely change the way information is stored, shared and documented. I've recently learned the hard way that once something is filed to the blockchain, it's on it whether you edit it or not. (noob mistake, I know, for someone who's been here for a bit lol) Thankfully it wasn't a serious offense or anything, but something I've held off engaging with and am not thrilled about. Water under the bridge though.

In the same manner, Hive and blockchain can be an official place for people to post things like statements, videos and other things and once it is on there in the form it's presented, we can see the original as well as any edits. This will definitely help in the era of Deep Fakes and all kinds of doublespeak that goes on. Additionally with the intentional modification of information and historical events by companies that have a lot more power than we realize, the permanence of information is crucial. Ever wonder why some books have different editions? There were editors that decided, themselves or asked by the author, that something needed to be removed. Some of the information removed can be innocuous but some can be absolutely vital to have in there. Kind of like the destruction of our history by trying to simply delete it, remove it or get rid of it, we need to have it there so we can see it and think what we will.

One of my longest comments in a while! :D. I am really passionate about this topic though, love that you are as well!

Would you rather be friends with a very helpful and nice Deep Fake (not knowing) or with a Human as they come...sometimes good and sometimes bad?

Human every time. The human element can't be replaced. Some suck, some are great but we are still human. We may disagree but that’s the point, we will never all agree but we need to navigate these differences like we are supposed to.

I understand your reasoning, you must be very compassionate as a person.

Good question. Today deep fakes serve other humans, so the nice and helpful part could be hiding another interest.
But when fakes become real, I mean real autonomous bots, I might prefer them.

That's a critical point! If the bot/fake is autonomous, that condition must be fulfilled.

Yes, the ability to have a provable record of historical actions like that provided by blockchains can be very useful in a system that does analysis of web of trust systems. I think this is a wide open topic for research as part of the design of a rating system.


There have recently been highly creative ID deception and theft attempts intercepted. In one, the perpetrator took over existing social media accounts of family and friends (that were otherwise abandoned), either by forcing them or by buying them. In another, the perpetrator created social media accounts for their victim. These two examples involve real persons. In many other cases, fictional profiles are created based on individuals. In even more cases, the perpetrator claims to act on behalf of an individual. This is most common with accounts claimed to be 'managed'. The victims normally have no idea what's going on or don't understand that their identities have been monetized. Hivewatchers deals with all of these, but even then it's hard.

Related, to your point of identity verification, I recently attempted a different approach at Hivewatchers with an account that was accused of being a fake ID. What I did was go through all the messages sent by the person. When reading them back to back, you see that on one day the person is happy and answers one way. On a different day, the person is upset and answers another way. They will give different answers to the same question based on what's going on. A fake account or a thematic account (a type of 'managed' account) can't do that. It will always respond in a similar sort of way because the person is acting. It will have some variation, but it will be minor and playing to a template. Me revealing this here won't give anyone the upper hand in defeating the ID verification procedure; it's beyond their acting capability.

As you know, in general, there is a lot of misinformation on Hive. People who spread it are interested in spreading it because they build up their fan base off it and create an echo-chamber. Everything is misread and twisted. We're used to seeing it in general but it's particularly detrimental when it's about Hive itself, as it affects the entire ecosystem. For example, that there's some mythical secret group deciding which proposals get funded. Or that the top witnesses are colluding. Due to the sheer number of these people, using crowd-sourcing to gauge the credibility of information while drawing on the same groups that have been misinformed won't work. Many "confirmed facts" aren't based on experience or education, just guesses. People make observations, they grow to believe them, others confirm their ideas, and now there's a majority view.

What I'm getting at is that a system that tests for variance in content, wording, subject matter and range of use by an account could be the start of a good web of trust. A real person of trust will have an opinion that changes, they will be different on this day from the previous, they will try different dapps, they will talk about things in their life and grow as a person. A fake one won't.

That is a very interesting comment.
I think any human pattern can be simulated by bots, however random it looks.
But this is indeed an undecided race.

In many other cases, fictional profiles are created based on individuals. In even more cases, the perpetrator claims to act on behalf of an individual.

In the system I'm planning, creation of a convincing fake account that masquerades as another person will be quite difficult. First, that person will need to create multiple fake accounts, because an account's authenticity is tightly correlated with other accounts that vouch for it. But of course it is not difficult for a determined identity thief to create many accounts and have them vouch for each other.

And here's where the serious difficulty comes in: in order for any of the information being reported by these accounts to be taken seriously by a person's local rating system, no matter how many sock puppets the identity thief controls, they have to convince other people in your trusted network that these are all real people.
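To illustrate with a deliberately simplified example (the code below is hypothetical, not part of any implementation): if trust only flows along vouching links starting from accounts you already trust, a self-contained cluster of sock puppets receives no trust at all, no matter how large it is:

```python
# Hypothetical illustration: sock puppets that only vouch for each other
# receive no trust, because no vouching path from my trusted network reaches them.

def reachable_from(vouches, start):
    """Accounts reachable from `start` by following vouching links."""
    seen, stack = set(), [start]
    while stack:
        person = stack.pop()
        if person not in seen:
            seen.add(person)
            stack.extend(vouches.get(person, []))
    return seen

vouches = {
    "me": ["alice"], "alice": ["bob"],        # my real trust network
    "sock1": ["sock2"], "sock2": ["sock3"],   # attacker's self-contained cluster
    "sock3": ["sock1"],
}
print(reachable_from(vouches, "me"))  # only {'me', 'alice', 'bob'} -- no socks
```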

Now, you may have one gullible friend in your trust network who's willing to vouch for someone's identity based on insufficient information, so we can already see ways that such a system can potentially be gamed. But I plan on going into more depth on attacks and defenses that a web of trust can deploy to mitigate these attacks in my next post.

Related, to your point of identity verification, I recently attempted a different approach at Hivewatchers

In this paragraph, you're describing an alternative method to using a web of trust to verify information (not just identity information, but all information). In this case, you're using your own critical thinking faculties to analyze the data you've received, looking for contradictions and patterns that can give you clues to whether the person is truly who they claim to be (the "truth" of their claim).

We all use critical thinking to some extent to rate information we receive, and it's how we contribute our own thinking power to the "hivemind" computer of our web of trust network (the people with whom we directly or indirectly share the information and opinions we have).

If none of us employed critical thinking, webs of trust would only be able to rate very basic types of information like what we've seen and heard, and they wouldn't be able to rate what I've referred to as "higher-level" information.

What I'm getting at is that a system that tests for variance in content

What you're describing goes beyond the scope of what a web of trust system does. You're describing an analytical engine that helps a human spot patterns. It's an aid to our native critical thinking ability.

Such systems can certainly play a role in helping us to properly rate information, in a similar way to the way a calculator can help us do math, and that sort of checking helps us to rate information more precisely, before we share our opinions with others via our personal web of trust.

In reverse order:

What you're describing goes beyond the scope of what a web of trust system does. You're describing an analytical engine that helps a human spot patterns.

Yes. My interest is in building systems as you likely know by now. A system can either learn or be adjusted based on the output it receives or its output parameters can indirectly adjust user behavior. This was done back when my post highlighter ran for the SSG community. Some users started formatting their posts to be picked up by the bot which looked for a few 'quality' measures.

And here's where the serious difficulty comes in: in order for any of the information being reported by these accounts to be taken seriously by a person's local rating system, no matter how many sock puppets the identity thief controls, they have to convince other people in your trusted network that these are all real people.

This just gave me an idea. One of the ID scammers that I alluded to here and mentioned privately was confirmed to be fraudulent by an individual geographically local to them. The individual was another long-time user from the same hometown who confirmed that its impossible for them to have the credentials they claim. Geographical coordinates of supporting or refuting parties, where they choose to provide them (such as in profile) or knowledge of a particular area, may be incorporated in the weighing algorithm.

Geographical coordinates of supporting or refuting parties, where they choose to provide them (such as in profile) or knowledge of a particular area, may be incorporated in the weighing algorithm.

Yes, when we start looking at using a web of trust to help analyze truth in various domains, one of the more "advanced" topics can be to look at how information in related areas can impact opinions about a specific truth (in the example you mention, proximity when verifying identity). To solve problems like this in a general way is a very challenging problem, but coding for the use of specific, logically related information to help rate other information is clearly possible.
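Purely as a hypothetical sketch of that idea, an opinion about someone's claimed identity or location could be weighted more heavily when the rater declares themselves geographically close to the claimed location (the function names and the distance scale below are invented for illustration):

```python
# Hypothetical sketch: give more weight to an identity opinion when the rater
# declares themselves geographically close to the claimed location.
import math

def distance_km(a, b):
    """Rough great-circle distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(h))

def locality_weight(rater_location, claimed_location, scale_km=500):
    """1.0 when the rater is local, falling off toward 0 with distance."""
    return math.exp(-distance_km(rater_location, claimed_location) / scale_km)

# A rater in the same hometown carries far more weight than one a continent away.
print(locality_weight((40.71, -74.0), (40.73, -73.9)))  # close to 1.0
print(locality_weight((48.85, 2.35), (40.73, -73.9)))   # close to 0.0
```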

Glad to see in the comments you are not seeking a 'social' score. Lots of info in the post, and the important questions in the comments brought out more info. I, for one, am looking forward to the future post on this topic.

Interesting to see what sort of information subjects will become popular for this type of system.

I am guessing people will have the option to post information here they would like rated for further accreditation of their combined works, or they can avoid it if their information/opinions are not open to being reviewed for ratings or criticism. Am I guessing right?

Which information topics and means of sharing information will be subject to analysis by a web of trust of this type is a very open question.

Initially, I think it's likely that it could be used to analyze specific domains of information (such as "real identity" or "digital identity" verification).

But looking beyond that, it could also be used to rate the quality of specific posts on Hive, for example. But I should be clear, that is by no means a major focus of this project, it's just an example of how it could be used. Ultimately I think it can be useful for rating most of the information we have, but figuring out how to best get to that point in a practical way isn't completely clear to me yet, it's another research topic in and of itself.

But to answer your question, if we look at that scenario (using it to rate posts on Hive), there's no real way to stop other people from rating your posts using such a system. You wouldn't be able to "opt out", because the analysis takes place on their own computers, although you could register your objections to such rating, I suppose.

Perhaps there would be a way to tag posts where this rating system was less important. Many genres, such as fantasy writing, philosophy, religion, politics, advice, predictions, etc., were never meant to be argued with scientific facts. The "what if" questions are often the purpose of the writing, to help people explore the possibilities.

I think, blocktrades, you should update the types of crypto coins you support for withdrawal, because currently it is too limited.

I'll give u my support po

Good news to hear, because no one on earth can do without information, and information/communication is now regarded as the fifth factor of production.

Information communication has always been the fundamental building block of human civilization.

Computers and networking technology have already radically changed how we communicate and process information, but we still have a lot to improve.

Automated 'Data' discrimination is one of the key functions of our Brain. This topic generally is very philosophical on its face. I'm too small on HIVE to drop any masterplan here in the comments, but let me thank you for sharing this.

To me, an entity that proves itself by reliably adding value to its companions' lives (on-chain in that case) should be considered increasingly trustworthy. It doesn't even matter if it's a bot or a human.

Ok ... give me a few days to comment :)


Sir, you have started a very important discussion about the Hive blockchain's rating points, which will play a very important role in letting us know about it. Your beautiful article will help us to know more about the Hive blockchain. Thank you so much for gifting us such a beautiful article.

First, it could be named "hive score".

As for rating the truth of information,

I would stack content on the same topic and bundle it under the recommended section on the content page. So people can read more from more sources and don't need to trust some point system.

IMO it is impossible to rate it in a trustworthy way. And if it's trustworthy, people will start to game the system.

The only way it would add trust is if, for example, a community builds a reputation for being trustworthy. With SMTs this could add some more trust, and not only rewards.

I guess there's one thing I should make clear at the outset: while this system has some slight relationship to the current reputation system on Hive (because I hope to replace that reputation system with this one), the two operate entirely differently.

As a simple example of the profound difference between the two systems, note that people will get individualized rating information. In other words, the rating provided by your system for a piece of information can and often will be different from the scores seen by other people.

Nonetheless, I'm sure there will be attempts to game any reputation system and the design I'm envisioning will still involve some personal responsibility by people to manage how well their rating system works. In fact, I think there will be some competition created to personally improve the performance of individual rating systems, to act as reliable sources of information for others, as this itself can be one way to earn reputation and influence.

But is a reputation system needed? It sounds to me like a trust score you can trust, but shouldn't always trust.

If people think about what they've read, that should work better. I can only speak for myself, but I don't care about the reputation points.

But I get the point, it's easier to filter out bad (extreme) content if people have a bad reputation. But all positive reputation ends up being much the same, I would expect.

For this site, I would agree that it's needed.

Offtopic: Besides upvotes, I would like to see a like button. Why? An upvote is IMO more of a supportive thing, while a like is something anyone can give. It would maybe make people feel more comfortable using Hive, because people care about rewards all the time.

If the reward were a "donation/support vote", it would be hard to blame this.

I think with the current distribution of Hive, we will never see a high-payout makeup tutorial or something similar.

But that's what average users care about. So a like can make the content creator feel good without high payouts. I see no negative (besides spam); it can in any case be a positive thing.

About spam, Hive needs to accept that social media = some kind of low-quality/spam content. If some Billy posts pictures of cats, it's close to spam :D

A rating system with such importance placed on it certainly brings up this idea of a "social credit system" which is justifiably troubling to many of us. But if the standard for which people are judged is decentralized we could all potentially decide our own ideas about what constitutes trustworthiness and build communities based around those ideas. (I believe Holochain is working on something like this, though I'm not sure.)

A simple version could be a "Leo Finance score" a "3speak score", a "Natural Medicine" score, and even a "Blurt score" etc. You'd have to create a dynamic kind of overall score based on this though, otherwise you'd run the risk of different score systems being at war with each other, where in certain in-groups, a low score at one group would be something people are proud of, and this creates a pretty hostile ecosystem not all that different from what we are seeing right now in the US. A "Left sider" may be proud to have a low "Right side" score and if this is the case, people may be forced to choose a side which will escalate conflict...very ugly.

If you can find a way to make an overall score that does not require high scores across different communities for an OK overall score, but requires a high score in a variety of communities for a GOOD overall score, and a good score in communities that typically don't get along for a GREAT overall score, then you might be on to something.

Ideally the reputation system could be built independently of how many resources we currently hold (I personally can't imagine anything that follows DPoS really improving upon what we have right now), otherwise it will always favor those who currently have power. It would be nice if it could leverage power and encourage it to perform in line with the will of the community, while also not forcing it to do so. Reputation becoming more volatile as it increases? That sounds like an interesting and controversial topic, but maybe one worth exploring? We may be able to fix what's wrong with DPoS by balancing it out with such a leverage system. As it stands, I feel the only way DPoS works is with decent whales, and luckily we have a few of those but that may not always be the case.

We don't need to make reputation a competition and so making it almost impossible to maintain the best scores could keep people honest long-term.

ID verification? I'm ok with it for myself but I don't want to force people into it, especially when we aren't all in the same situations IRL, some can afford more transparency than others. ID verification only being a small factor could be helpful. Extra 5-10% score for verification.

If you could find a way to de-incentivize ass-kissing, that would also be fantastic, though it's not as fundamental as the concerns above.

Looking forward to seeing the basics of what parameters would be used to decide reputation (start with the less technical for all of us who are bad with math :-P ).

A rating system with such importance placed on it certainly brings up this idea of a "social credit system" which is justifiably troubling to many of us.

A simple version could be a "Leo Finance score" a "3speak score", a "Natural Medicine" score, and even a "Blurt score" etc.

The system I want to build is quite different from a "social scoring" system, and even more individualized than you're suggesting in your second paragraph.

The web of trust system I'm envisioning will actually give you personally unique results, different from those of anyone else, unless you by chance or on purpose configure your personal system the same as someone else.

This system will allow you to specify how it rates information and to modify how it computes ratings based on your experience with it over time. The potential variations in how an individual's rating system works are nearly limitless, although there will be some commonalities in how they function.
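Purely as a hypothetical illustration of what "configuring your personal system" might look like, imagine a small set of tunable parameters that each user adjusts over time (none of these parameter names exist in any implementation yet):

```python
# Hypothetical personal configuration for an individual rating system.
# None of these parameters exist in any real implementation yet.

my_rating_config = {
    "trust_decay_per_hop": 0.5,      # how quickly trust fades through the web
    "max_hops": 4,                   # how far to follow vouching links
    "consensus_weight": 0.2,         # how much agreement among sources matters
    "expertise_weight": 0.5,         # how much domain expertise matters
    "motivation_penalty": 0.3,       # discount applied to self-interested sources
    "minimum_score_to_trust": 0.6,   # below this, information is flagged as unrated
}
```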

You've got me very interested. I think it would have to encourage people from different perspectives to make peaceful relationships despite their disagreements without pressuring people to give up their ideas. I have no idea how you'd do that but maybe I will once I see the dummies guide to your rep system.

It sounds very similar to what I heard from someone at holochain. You might want to check them out. Don't know how you feel about them but more collaborations with other strong communities would be nice if the goals are the same. I will try to find where they mention that.

I just reread your post, really worth the time. If you manage to solve this Challenge, you can go ahead and try to create a permanent voting simulation. That would be something very important to have for the world we are in right now. Did you read More Equal Animals already?

I haven't had time to read Dan's book yet, but I worked with him for a few years, so I'd guess I'm aware of many of his ideas. Still I'll definitely make time at some point to read it, since I'm sure his ideas have evolved quite a bit since then. Dan's a very intelligent guy, but I think he frequently misunderstands most people, or at least fails to predict their behavior well, simply because he's quite different than most people, and he often seems to miss where those differences exist.

On the voting subject, I think that a rating system has interesting potential implications for voting, governance, and even economics. But that's really getting into speculative territory, so I doubt I'll be talking about that much until we have working prototypes.

No offense, but this will fail. Why?

The scientific method as a means of rating information

You use this site, right? You see how many conspiracy theories end up on trending DAILY, right? All those posts DENY the scientific method.

Which is highly amusing seeing as, much like the consensus of scientists, the consensus of the blockchain is what makes blockchains work. But it attracts these scientific consensus deniers!

I'm aware that there are people that dispute the validity of the scientific method. Nonetheless, scientific progress continues.

For similar reasons, I think tools like this that help automate and improve our normal cognitive processes will be useful to human progress, even if not everyone will use them or even if they abuse them to create echo chambers.

I don't view this as a mechanism where its success or failure rests solely on its ability to achieve universal consensus of opinion. I'm not that naive about human nature. And in some areas I don't think we even have enough information to reasonably achieve consensus of opinion, unless maybe the consensus opinion is "we can't be sure".

I was feeling sleepy while reading. I will read it again 2-3 times before I understand all or some of this. Great stuff.

It seems like a good approach to me.

Really enjoyed the read, it's a very interesting topic. I also like the anonymity aspect as without it this system would be terrible and run into politics. For example, on hive, if I know freedom has the biggest stake, then I have to make sure I give him a high score in every single category. That would possibly buy me his witness votes, or maybe not, but surely scoring him as terrible in terms of trust will not make him want to vote me as a witness.

So assuming that there is enough privacy with the ratings, I think this can be revolutionary. However, this feature must be optional for communities. A community must be able to decide if it wants an algorithmic trust rating for its users or not. Forcing that on everyone will cause people to avoid using the product for fear of being scored negatively.

There are also some fundamental issues with such a system:

  • The accuracy of the algorithm cannot be measured using the algorithm, so there is a fundamental assumption (like axioms in math) that the web of trust delivers accurate ratings.

The accuracy of the ratings can only be deemed good or bad by people, or by the developer himself. As you have amazingly demonstrated in your post, people are not good at that. It is very easy for two devs to develop different algorithms for a web of trust. Each web of trust will show different scores for the same people, and then people will choose one algorithm over the other. How do you choose which web of trust is the most accurate? It all comes back to normal life and normal information sharing.
If there were a feedback loop (implement a web of trust, look at the trust scores, then judge whether it works or not), we could improve it. However, no one can judge whether the scores are correct, so we will never know if a scoring system is accurate. It will all boil down to personal beliefs and the choice between different scoring systems, depending on who you trust in real life, similar to the basic information sharing we have now, without any scoring system.

  • People are tribalistic and stupid. They believe what they want to believe, no matter what trust score you might show them.

Even if the web of trust perfectly scores everyone, people will not take it into account. For instance, we saw recently with the US elections how some people thought the election was rigged, while others think that is a ridiculous claim. While the truth might seem obvious to some, it is just as obviously the opposite to others. I am certain that no matter what score Trump has on a web of trust, his followers will believe what he says and believe the election was rigged, even if every poll expert says it wasn't. And this applies to both sides. This also poses a challenge for the algorithm calculating the scores: how do you score sources whose ratings are extremely polarized, where 50% of the people say it's flat wrong and 50% say it's absolutely true?
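As a toy illustration of that last question (my own sketch, with hypothetical numbers), a plain average cannot distinguish a contested claim from a genuinely middling one; one naive option is to report the spread of the ratings alongside the average so the disagreement is at least visible:

```python
# A 50/50 split ("flat wrong" vs. "absolutely true") and a unanimous "so-so"
# rating produce the same average; a simple disagreement measure tells them apart.

from statistics import mean, pstdev

def summarize(ratings: list[float]) -> dict[str, float]:
    """Return the average rating plus a simple disagreement measure."""
    return {
        "average": mean(ratings),
        # 0 = everyone agrees, 0.5 = maximal split on a 0..1 scale
        "disagreement": pstdev(ratings),
    }

contested = [0.0] * 50 + [1.0] * 50   # half say "flat wrong", half say "absolutely true"
middling  = [0.5] * 100               # everyone genuinely says "so-so"

print(summarize(contested))  # average 0.5, disagreement 0.5
print(summarize(middling))   # average 0.5, disagreement 0.0
```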

On another note, this post made me think of Bridgewater, Ray Dalio's top-performing hedge fund. I would highly suggest listening to his TED talks if you haven't already. He has implemented exactly what you are proposing inside his company. People rate others based on their viewpoints on the economy and the market. Trust scores, which he calls "believability", are then calculated. Trading/investing decisions are then made by an algorithm that takes everyone's believability into account. Bridgewater has the best long-run performance of all the hedge funds. One key difference is that they have real-world feedback: if the fund makes money, the algorithm is useful; if they lose money, then maybe it's time to tweak it.
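For readers unfamiliar with the idea, here is a rough sketch (my paraphrase of the comment above, not Bridgewater's actual algorithm; names and numbers are made up) of what believability-weighted decision making could look like:

```python
# Each person's numeric view is weighted by a believability score earned from
# their track record, and the group decision is the weighted combination.

def believability_weighted_view(views: dict[str, float],
                                believability: dict[str, float]) -> float:
    """Combine views (e.g. -1 = strongly bearish, +1 = strongly bullish)
    weighted by each person's believability score."""
    total = sum(believability.get(p, 0.0) for p in views)
    return sum(believability.get(p, 0.0) * v for p, v in views.items()) / total

views = {"analyst_a": +0.8, "analyst_b": -0.4, "analyst_c": +0.2}
believability = {"analyst_a": 0.9, "analyst_b": 0.3, "analyst_c": 0.5}
print(believability_weighted_view(views, believability))  # ~0.41, pulled up by analyst_a
```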

You might argue that in our case we can judge the success of a web of trust by the real-life "success" of the community using it. But this feedback loop can take years or even decades to play out in society. For example, US monetary policy has brought wealth to the country for decades, despite trade deficits and declining productivity. How would the Fed be rated in a web of trust? What happens when it all crashes?

Edit - I just read one of your replies in the comments. If I understand it correctly, the scores will be different for everyone and will be personalized by the user himself, so all my points above are not valid. Essentially it becomes a tool that allows you to quickly identify who to follow and trust on social media. But it won't be much more than that. It is definitely useful, but it can also make people live inside their own bubbles, somewhat like what big tech does today. Users will endlessly tweak the algorithm until they get recommended the users/posts they enjoy.

In this first post, I intentionally didn't say a lot about how rating software could help our current methods of rating information or how it would work, because I had a lot to say about how we rate information now, and I hoped to keep the focus on that (and separate the discussion from talks on a rating system itself, since it's likely more controversial and logically distinct).

I probably should have been clearer on this point in my closing section, because most of the post's comments are still based on expectations of how a rating system might work and on discussion of its potential/perceived flaws.

Maybe I can take that to mean there's relatively little disagreement with my interpretation of how we rate information today, which I suppose is a good thing, if it means we have some consensus on much of the information shared so far.

Anyways, my next post will explore some of the concerns you've raised about potential implementations of a rating system.

What about the Bridgewater/Ray Dalio method? Have you heard of it before? It might be interesting to you.

Looking forward to your next post.

I hadn't heard of it, but it does seem to have correspondences with the rating systems I'm envisioning.

I should also add that when I started doing literature searches, I found portions of many of my own ideas espoused by other researchers, sometimes with very similar ideas, even down to the specific mathematical techniques that might be useful for analyzing web of trust data.

But back to the Bridgewater method: updating this type of rating system based on performance is also possible in most information domains, and IMO designs for a rating system should include plans for allowing feedback on current results to influence future results. Admittedly, measuring performance won't always be clear-cut in all domains, and will probably depend on Darwinian theory to some extent as well.
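As a rough illustration of that feedback idea (a sketch only, using a simple multiplicative update and hypothetical names, not a committed design), updating trust weights from how well past ratings matched later outcomes might look something like this:

```python
# When a rater's past ratings turn out to match later outcomes, their weight
# is preserved; when they don't, it shrinks in proportion to the error.

def update_trust(trust: dict[str, float],
                 predictions: dict[str, float],
                 outcome: float,
                 learning_rate: float = 0.5) -> dict[str, float]:
    """Shrink each rater's weight in proportion to how wrong they were."""
    updated = {}
    for rater, weight in trust.items():
        # error is 0 for a perfect rating, 1 for a maximally wrong one
        error = abs(predictions.get(rater, outcome) - outcome)
        updated[rater] = weight * (1.0 - learning_rate * error)
    return updated

trust = {"bob": 1.0, "carol": 1.0}
predictions = {"bob": 0.9, "carol": 0.2}   # bob said "very likely true", carol said "probably false"
outcome = 1.0                              # the claim later turned out to be true
print(update_trust(trust, predictions, outcome))  # bob keeps ~0.95, carol drops to ~0.6
```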

When will direct messages be allowed? Can I write a plan for HIVE development (marketing)? @blocktrades, can I write to you directly on Telegram?

I'm not particularly involved in marketing of Hive, so I'm not the best person to contact. @guiltyparties has set up a Mattermost channel for real-time marketing discussions; that's probably your best venue, so you should contact him to get added there.

Getting back on Hive very soon...

Excellent project idea. This punches right into epistemology, which is how we know what we know. It'd be great to either be quite opinionated about your epistemology, more so than the article above outlines, or be a bit more agnostic about it and allow users to set things.
A big part of consensus forming happens in communities of discourse, that is, what a community talks about, what it assumes to be true, and how it tests new information. This matters because different disciplines have different ways of doing that, and for good reasons. For example, the scientific method is great for many things to do with the observable world, is somewhat okay for social questions, and is terrible for philosophical debates.
There's already some work on rating knowledge in the semantic web community and it might be a good starting point to work on from there. I wonder if @grampo wants to weigh in?

Sorry for the late reply, but sometimes I'm not inspired with an immediate response to a comment, but maybe better late than never.

Epistemology is a philosophical debate and it certainly is directly related to the ideas of this post.

And I will agree that the scientific method has little to say about philosophical debate (similarly it has little to say about math, although math can have plenty to say about science). I only meant to use the scientific method as an example of one way we can employ critical thinking methodologies, as opposed to relying on the opinions of others. Logic is another, and that one even works in philosophical debates.

As to why I didn't go deep into epistemology, it's because I haven't spent a lot of time studying the philosophical arguments of others in this area, so I don't consider myself an expert (although it wouldn't shock me if I get forced down that route at some point since we already see commenters claiming there are many types of truth).

But I'm not really proposing a definition(s?) of truth; I'm just looking at the issue pragmatically, in terms of how we mostly rate the truth of information now, and how we could improve those methods. Philosophers can and probably will argue for our entire lifetimes about what it means for something to be true, but in the meantime we'll all be using our own methods for deciding what we believe to be true.

Sensible to remain pragmatic; epistemology is an abyss. Being explicit about your assumptions is plenty!
Justified True Belief is a common framework for knowledge and it appears you're proposing to use discourse (discussion, voting, user ratings) as a means to decide justification. It sounds like you're looking at web-of-trust/reputation to help qualify the justifiers - which will have a similar effect to what is meant by communities of discourse. Cool.
Actually really excited about this project.

Nice to see your aim.