In this article we develop the basic mathematical formula for calculating the opinion of the meta-reasoner in arguments involving a single main argument thread.
Suppose a sensational murder trial is being discussed in an online platform that allows the general public to vote on what they think the verdict should be and why.
Initially, 1,000 users vote on the root claim (𝐴) the defendant is guilty, before any discussion has taken place on the platform. Then after this initial vote, somebody submits an argument claiming (𝐵) the defendant signed a confession, and users are asked to vote on this claim.
150 out of the 1,000 users vote on 𝐵. Of these 150 users, a small number changed their vote on 𝐴 after voting on 𝐵 (presumably, because they found 𝐵 convincing).
The final votes are tabulated in the following table. We represent votes using the numeric values 0=reject, 1=accept, and -1=didn’t vote.
|         | A=-1 | A=0 | A=1 | SUM  |
|---------|------|-----|-----|------|
| B=-1    | 0    | 455 | 395 | 850  |
| B=0     | 0    | 25  | 25  | 50   |
| B=1     | 0    | 20  | 80  | 100  |
| SUM     | 0    | 500 | 500 | 1000 |
| B≥0     | 0    | 45  | 105 | 150  |
According to this table, all 1,000 users voted on 𝐴, with 500 rejecting 𝐴 (𝐴=0) and 500 accepting 𝐴 (𝐴=1). But only 150 users voted on 𝐵 (𝐵≥0).
Raw Probabilities
Our first step is to convert these counts into probabilities.
Let's define a function $c$ that returns the value of a cell in this table. For example, the number of users who accept both 𝐴 and 𝐵 is:

$$c(A{=}1, B{=}1) = 80$$
From this, we can define a function $P$ that tells us the probability that a random user voted in some way. For example:

$$P(A{=}1) = \frac{c(A{=}1)}{c()} = \frac{500}{1000} = 50\%$$
We can also define conditional probabilities, for example the probability that a random user accepts 𝐴 given they accept 𝐵:

$$P(A{=}1 \mid B{=}1) = \frac{P(A{=}1, B{=}1)}{P(B{=}1)}$$
We can calculate conditional probabilities just by taking the ratio of counts, because:

$$P(A{=}1 \mid B{=}1) = \frac{P(A{=}1, B{=}1)}{P(B{=}1)} = \frac{c(A{=}1, B{=}1)/c()}{c(B{=}1)/c()} = \frac{c(A{=}1, B{=}1)}{c(B{=}1)}$$

So:

$$P(A{=}1 \mid B{=}1) = \frac{c(A{=}1, B{=}1)}{c(B{=}1)} = \frac{80}{100} = 80\%$$
Note that $P$ represents the probability that a randomly selected user, from among those who voted, votes in some way. If this sample of users is small or biased, $P$ may not be a good estimate of what an average person actually believes. We will ignore this detail in this article, but address it in Bayesian Averaging.
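To make these definitions concrete, here is a minimal Python sketch of $c$ and $P$ over the table above (the dictionary representation and helper names are our own, purely illustrative):

```python
# Vote counts from the table above, keyed by (A, B).
# Vote values: -1 = didn't vote, 0 = reject, 1 = accept.
counts = {
    (0, -1): 455, (1, -1): 395,
    (0,  0):  25, (1,  0):  25,
    (0,  1):  20, (1,  1):  80,
}

def c(A=None, B=None):
    """Sum the counts of all cells matching the given constraints. Each
    constraint is an exact vote value, a predicate, or None (unconstrained)."""
    def ok(v, constraint):
        if constraint is None:
            return True
        return constraint(v) if callable(constraint) else v == constraint
    return sum(n for (a, b), n in counts.items() if ok(a, A) and ok(b, B))

voted = lambda v: v >= 0  # the user cast a vote (accept or reject)

print(c())                            # 1000  -- all users voted on A
print(c(A=1) / c())                   # 0.5   -- P(A=1)
print(c(A=1, B=1) / c(B=1))           # 0.8   -- P(A=1 | B=1)
print(c(A=1, B=voted) / c(B=voted))   # 0.7   -- P(A=1 | B>=0)
```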
Informed Probabilities
$P(A{=}1)$ is only 50%, but $P(A{=}1 \mid B{=}1)$ is 80%. This means that users who accept claim 𝐵 are more likely to accept claim 𝐴, so 𝐵 apparently is an effective supporting argument for 𝐴. On the other hand, $P(A{=}1 \mid B{=}0) = 25/50 = 50\%$: users who reject 𝐵 are not more likely to accept 𝐴.
Notably, among users who either accept OR reject 𝐵, 70% of users accept 𝐴:

$$P(A{=}1 \mid B{\geq}0) = \frac{c(A{=}1, B{\geq}0)}{c(B{\geq}0)} = \frac{105}{150} = 70\%$$

while only 50% of users accept 𝐴 overall. Apparently, simply voting on 𝐵 made users more likely to accept 𝐴.
What's happening here is that, among users who voted on 𝐵, a large number accept 𝐵 as true, and as we've seen, users who accept 𝐵 are more likely to accept 𝐴. What makes the group of users who voted on 𝐵 different is that all of them are informed about 𝐵. Whether they accept it as true or not, they have at least been presented with the claim that (𝐵) the defendant signed a confession and had a chance to reject it, or to accept it and revise their belief accordingly. This is not necessarily the case for the larger group of users: perhaps the media coverage of the murder never mentioned any confession, and most users never learned about it until they were asked to vote on claim 𝐵.
This is just made-up data, but it is meant to illustrate something that is often the case in reality: arguments can change minds – especially if they provide new information.
Our goal is to calculate the beliefs of the Meta-Reasoner: a hypothetical fully-informed user who shares the knowledge of all the other users. So the opinion of users who voted on 𝐵 is probably a better estimate of a fully-informed opinion.
So we'll call the users who voted on 𝐵 the *informed users*, and our first step in estimating the beliefs of the meta-reasoner will be to represent the opinion of the average informed user with the informed probability function $P_i$:

$$P_i(A{=}1) \;\overset{\text{def}}{=}\; P(A{=}1 \mid B{\geq}0)$$

which we have already calculated to be 70%.
The Law of Total Probability
The informed opinion on 𝐴 depends on 1) the probability that an informed user actually accepts 𝐵, and 2) the probability that a user who accepts 𝐵 also accepts 𝐴. In fact, we can rewrite the equation for $P_i(A{=}1)$ in terms of these probabilities. Since the set of users who accept 𝐵 and the set that reject 𝐵 partition the set of users who voted on 𝐵, the law of total probability says that:

$$P_i(A{=}1) = P(A{=}1 \mid B{=}1)\,P(B{=}1 \mid B{\geq}0) + P(A{=}1 \mid B{=}0)\,P(B{=}0 \mid B{\geq}0) \tag{1}$$
We have already calculated $P(A{=}1 \mid B{=}1)$ and $P(A{=}1 \mid B{=}0)$ above, so it remains only to calculate $P(B{=}1 \mid B{\geq}0)$:

$$P(B{=}1 \mid B{\geq}0) = \frac{c(B{=}1)}{c(B{\geq}0)} = \frac{100}{150} = \frac{2}{3}$$
Plugging these values into (1), we again get 70%:

$$P_i(A{=}1) = 80\% \times \frac{2}{3} + 50\% \times \frac{1}{3} = 70\%$$
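As a quick numeric check, here is the same calculation scripted in Python (a sketch, with the probabilities hard-coded from the table above):

```python
# Inputs read off the vote table above.
p_A1_given_B1 = 80 / 100      # P(A=1 | B=1)
p_A1_given_B0 = 25 / 50       # P(A=1 | B=0)
p_B1_given_voted = 100 / 150  # P(B=1 | B>=0)

# Formula (1): law of total probability over users who voted on B.
p_informed = (p_A1_given_B1 * p_B1_given_voted
              + p_A1_given_B0 * (1 - p_B1_given_voted))
print(p_informed)  # ~0.7, i.e. 70%
```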
Formula (1) is important because it shows us exactly how the probability that users accept 𝐵 determines the probability that they accept 𝐴. And critically, it shows us what the probability of accepting 𝐴 would be if the probability of accepting 𝐵 were different.
Distributed Reasoning
Now suppose a second group of 10 users holds an argument about whether to accept 𝐵, and during this argument users voted on the claim (𝐺) the signature was forged. And suppose these users unanimously accept 𝐺 and found it very convincing: only 1 in 10 users accept 𝐵 after accepting 𝐺.
Clearly, the opinion of the meta-reasoner about 𝐵 will be equal to the opinion of the second group of voters, since this opinion is more informed, reflecting any new information conveyed by 𝐺.
Let's define a function $P_h$ that gives us the beliefs of the meta-reasoner. The belief of the meta-reasoner about 𝐵 is the informed opinion on 𝐵, which is the opinion of users who also voted on 𝐺:

$$P_h(B{=}1) \;\overset{\text{def}}{=}\; P(B{=}1 \mid G{\geq}0)$$
Let’s put the vote counts from the sub-jury in a table:
|        | B=0 | B=1 | B≥0 |
|--------|-----|-----|-----|
| 𝐺=0    | 0   | 0   | 0   |
| 𝐺=1    | 9   | 1   | 10  |
| 𝐺≥0    | 9   | 1   | 10  |
And now we can calculate:

$$P_h(B{=}1) = P(B{=}1 \mid G{\geq}0) = \frac{c(B{=}1, G{\geq}0)}{c(G{\geq}0)} = \frac{1}{10} = 10\%$$
Recall that (1) tells us how belief in 𝐵 determines the first group of users' belief in 𝐴. So to calculate the probability that a member of the first jury would accept 𝐴 if they held the beliefs of the second jury about 𝐵, we simply substitute $P_h(B{=}1)$ in place of $P(B{=}1 \mid B{\geq}0)$ in (1):

$$P_h(A{=}1) = P(A{=}1 \mid B{=}1)\,P_h(B{=}1) + P(A{=}1 \mid B{=}0)\,P_h(B{=}0) \tag{2}$$

Plugging in the numbers:

$$P_h(A{=}1) = 80\% \times 10\% + 50\% \times 90\% = 53\%$$
The meta-reasoner's belief $P_h(A{=}1) = 53\%$ is very close to $P(A{=}1 \mid B{=}0) = 50\%$ – the average belief of users who voted on 𝐵 but rejected it – because a fully-informed user would probably reject 𝐵.
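In code, the substitution in formula (2) is a one-liner; again a sketch, with the numbers from both juries hard-coded:

```python
# First jury: how belief in B determines belief in A (from the main table).
p_A1_given_B1 = 0.8   # P(A=1 | B=1)
p_A1_given_B0 = 0.5   # P(A=1 | B=0)

# Second jury: informed belief about B among users who voted on G.
p_h_B1 = 1 / 10       # P_h(B=1) = P(B=1 | G>=0)

# Formula (2): substitute the sub-jury's belief for P(B=1 | B>=0).
p_h_A1 = p_A1_given_B1 * p_h_B1 + p_A1_given_B0 * (1 - p_h_B1)
print(p_h_A1)  # ~0.53
```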
Causal Assumptions
Conditional Independence
Formula (2) is only valid if we assume the meta-reasoner forms their belief about (𝐴) the defendant is guilty entirely based on their belief about (𝐵) the defendant signed a confession. So their belief in (𝐺) the signature was forged does not affect their belief in 𝐴 directly, but only indirectly through 𝐵. In other words, 𝐴 is conditionally independent of 𝐺 given 𝐵. We discuss the justification for making these causal assumptions in the Meta-Reasoner.

Unfortunately, we can't make the same sort of assumption about (𝐶) the defendant retracted her confession. 𝐶 does not affect belief in 𝐴 only through 𝐵: learning that the defendant retracted her confession may make less of an impression on a user who never believed the defendant signed a confession in the first place. So the effect of accepting 𝐶 on a user's acceptance of 𝐴 depends on whether or not that user accepts 𝐵.
The reason we can make the conditional independence assumption about 𝐺 and not 𝐶 is that 𝐺 is the premise of a premise argument, whereas 𝐶 is the premise of a warrant argument. The difference between premise arguments and warrant arguments is discussed in more detail in the Argument Model.
Formula for a 2-Argument Thread
Our next task is to calculate the opinion of the meta-reasoner after argument 𝐶 has been made.
First, we need to update our definition of the informed opinion. Previously, we defined the informed opinion as the opinion of users who voted on 𝐵; now that we have a second premise 𝐶 in the argument thread, we should include 𝐶 in the definition of informed opinion.
However, for users who reject 𝐵, what they think about 𝐶 is irrelevant, because (𝐶) the defendant retracted her confession is only argued as a way of convincing people who accept (𝐵) the defendant signed a confession that they still shouldn't accept 𝐴. We discuss this important concept in the section Argument Threads are Dialogs in the Argument Model.
So we'll define the informed opinion as the opinion of users who either reject 𝐵, or accept 𝐵 and have voted on 𝐶:

$$P_i(A{=}1) \;\overset{\text{def}}{=}\; P\big(A{=}1 \mid B{=}0 \cup (B{=}1 \cap C{\geq}0)\big)$$
We can then rewrite the formula for $P_i(A{=}1)$ using the law of total probability and some probability calculus. The derivation is similar to the derivation of (1) and is shown in the appendix:

$$\begin{aligned} P_i(A{=}1) = \; & P(A{=}1 \mid B{=}0)\,P(B{=}0 \mid B{\geq}0) \\ + \; & P(A{=}1 \mid B{=}1, C{=}0)\,P(C{=}0 \mid B{=}1, C{\geq}0)\,P(B{=}1 \mid B{\geq}0) \\ + \; & P(A{=}1 \mid B{=}1, C{=}1)\,P(C{=}1 \mid B{=}1, C{\geq}0)\,P(B{=}1 \mid B{\geq}0) \end{aligned} \tag{3}$$
Now, suppose a third sub-jury holds a sub-trial about whether to accept 𝐶, giving us $P_h(C{=}1)$. We can then plug in the opinions of the sub-juries $P_h(B{=}1)$ and $P_h(C{=}1)$ in place of $P(B{=}1 \mid B{\geq}0)$ and $P(C{=}1 \mid B{=}1, C{\geq}0)$ in (3):

$$\begin{aligned} P_h(A{=}1) = \; & P(A{=}1 \mid B{=}0)\,P_h(B{=}0) \\ + \; & P(A{=}1 \mid B{=}1, C{=}0)\,P_h(C{=}0)\,P_h(B{=}1) \\ + \; & P(A{=}1 \mid B{=}1, C{=}1)\,P_h(C{=}1)\,P_h(B{=}1) \end{aligned} \tag{4}$$
This gives us the posterior belief of the meta-reasoner as a function of the prior probability function $P$ and the evidence from the sub-juries, $P_h(B{=}1)$ and $P_h(C{=}1)$.
Using the shorthand $h$ to refer to the formula in (4), we illustrate this calculation in the chart below:
To show a sample calculation, suppose we obtain the following probabilities for users that have voted on 𝐴, 𝐵, and 𝐶:
| 𝐵 | 𝐶  | 𝑃(𝐴=1 \| 𝐵, 𝐶) |
|---|----|------------------|
| 0 | -1 | 50%              |
| 1 | 0  | 80%              |
| 1 | 1  | 65%              |
And suppose, for example, that the beliefs from the sub-juries come out to $P_h(B{=}1) = 90\%$ and $P_h(C{=}1) = 80\%$. Plugging these into (4):

$$P_h(A{=}1) = 50\% \times 10\% + \big(80\% \times 20\% + 65\% \times 80\%\big) \times 90\% = 66.2\%$$

Intuitively, this result reflects the fact that, although 𝐵 is an effective argument ($P(A{=}1 \mid B{=}1, C{=}0) = 80\%$, versus $P(A{=}1 \mid B{=}0) = 50\%$) and the sub-jury mostly accepts it ($P_h(B{=}1) = 90\%$), 𝐶 is a fairly effective counter-argument ($P(A{=}1 \mid B{=}1, C{=}1) = 65\%$) that the sub-jury also mostly accepts.
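The same calculation for the two-argument thread, as a sketch with the example values above hard-coded:

```python
# Informed probabilities of the first jury (table above).
p_A1_B0    = 0.50   # P(A=1 | B=0)      -- C is irrelevant to these users
p_A1_B1_C0 = 0.80   # P(A=1 | B=1, C=0)
p_A1_B1_C1 = 0.65   # P(A=1 | B=1, C=1)

# Beliefs of the sub-juries (the example values assumed above).
p_h_B1 = 0.9
p_h_C1 = 0.8

# Formula (4): total probability over where the dialog stops.
p_h_A1 = (p_A1_B0 * (1 - p_h_B1)
          + (p_A1_B1_C0 * (1 - p_h_C1) + p_A1_B1_C1 * p_h_C1) * p_h_B1)
print(round(p_h_A1, 3))  # 0.662
```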
Formula for Long Threads
To generalize (4), we first rewrite it in the more easily-generalizable form:

$$\begin{aligned} P_h(A{=}1) = \; & P(A{=}1 \mid B{=}0)\,\big(1 - P_h(B{=}1)\big) \\ + \; & P(A{=}1 \mid B{=}1, C{=}0)\,P_h(B{=}1)\,\big(1 - P_h(C{=}1)\big) \\ + \; & P(A{=}1 \mid B{=}1, C{=}1)\,P_h(B{=}1)\,P_h(C{=}1) \end{aligned} \tag{5}$$

Each term corresponds to one point at which the dialog can stop: the user rejects 𝐵, accepts 𝐵 but rejects 𝐶, or accepts both.
Now suppose underneath the claim α there is a thread with 𝑛 premises $\beta = (B_1, B_2, \ldots, B_n)$. Then:

$$P_h(\alpha{=}1) = \sum_{i=0}^{n} P(\alpha{=}1 \mid B_1{=}1, \ldots, B_i{=}1, B_{i+1}{=}0)\,\big(1 - P_h(B_{i+1}{=}1)\big) \prod_{j=1}^{i} P_h(B_j{=}1) \tag{6}$$

where for $i = n$ the term is simply $P(\alpha{=}1 \mid B_1{=}1, \ldots, B_n{=}1) \prod_{j=1}^{n} P_h(B_j{=}1)$, since there is no premise $B_{n+1}$ left to reject, and where each $P_h(B_j{=}1)$ is obtained by applying the same formula to the thread underneath $B_j$.

Note this function is recursive. The recursion terminates when it reaches a terminal claim in the argument graph – a claim without any premise arguments underneath it – in which case β will be ∅ and the function will therefore return

$$P_h(\alpha{=}1) = P(\alpha{=}1)$$

or the raw probability that a user accepts the terminal claim α.
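To make the recursion concrete, here is a minimal Python sketch of formula (6). The `Claim` tree and the probability-table encoding are our own illustration, not part of the article's model:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A claim plus the thread of premise arguments underneath it. `p` maps a
    tuple of votes on the thread's premises -- e.g. (1, 0) for "accepted B1,
    rejected B2" -- to the probability that a user accepts this claim given
    those votes. The empty key () is the raw, unconditioned probability."""
    p: dict
    premises: list = field(default_factory=list)  # thread B1..Bn, in dialog order

def p_h(claim: Claim) -> float:
    """Posterior belief of the meta-reasoner in `claim`, per formula (6).
    The recursion bottoms out at terminal claims (no premises)."""
    if not claim.premises:
        return claim.p[()]        # raw probability P(alpha=1)
    total, accepted, prefix = 0.0, (), 1.0
    for premise in claim.premises:
        q = p_h(premise)          # recurse: meta-reasoner's belief in premise
        # Dialog stops here: the prefix is accepted, this premise rejected.
        total += claim.p[accepted + (0,)] * prefix * (1 - q)
        accepted += (1,)
        prefix *= q
    return total + claim.p[accepted] * prefix  # every premise accepted

# The single-argument example from earlier: A <- B <- G.
G = Claim(p={(): 1.0})                             # all 10 sub-jurors accept G
B = Claim(p={(0,): 0.0, (1,): 0.1}, premises=[G])  # P(B=1|G=1) = 1/10; the G=0
                                                   # row has no votes, so 0.0 is
                                                   # a placeholder that never
                                                   # contributes here
A = Claim(p={(0,): 0.5, (1,): 0.8}, premises=[B])
print(p_h(A))  # ~0.53, matching the calculation above
```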
Next in this Series
We can now calculate the posterior beliefs of the meta-reasoner for fairly complex argument trees, comprising arbitrarily long argument threads, and arbitrarily deep nesting of juries and sub-juries. But what about cases where there are multiple premise arguments under a claim (each starting a thread), or even multiple warrant arguments under a premise argument?
We will address this issue, as well as the problem of sampling error, in the article on Bayesian Averaging.
Appendix
Derivation 1
Let's define a new variable $J$ that indicates that a user has participated in the sub-jury and voted on 𝐺:

$$J{=}1 \;\overset{\text{def}}{=}\; G{\geq}0$$

Note also that all participants in the sub-jury vote on 𝐵, so $J{=}1$ implies $B{\geq}0$.
Our causal assumptions are that:

1. simply voting on 𝐵 (and thus being informed of the arguments for/against 𝐴) affects the probability of accepting 𝐴, and
2. 𝐵 is the only variable that directly affects the probability of accepting 𝐴 (the [conditional independence](#conditional-independence) assumption).
These assumptions give us this causal graph:
𝐽 → 𝐵 → 𝐴
We previously defined:

$$P_h(B{=}1) \;\overset{\text{def}}{=}\; P(B{=}1 \mid G{\geq}0) = P(B{=}1 \mid J{=}1)$$

We now want to calculate:

$$P_h(A{=}1) = P(A{=}1 \mid J{=}1)$$

That is, the probability that a user who voted on 𝐵 would accept 𝐴 if they voted on 𝐺 (even though no user has actually done so).
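From these assumptions, the derivation is short; here is a sketch of the remaining steps (our reconstruction, using the conditional independence of 𝐴 and 𝐽 given 𝐵 implied by the graph):

$$\begin{aligned}
P(A{=}1 \mid J{=}1)
  &= \sum_{b \in \{0,1\}} P(A{=}1 \mid B{=}b,\, J{=}1)\, P(B{=}b \mid J{=}1)
     && \text{(law of total probability)} \\
  &= \sum_{b \in \{0,1\}} P(A{=}1 \mid B{=}b)\, P(B{=}b \mid J{=}1)
     && (A \perp J \mid B \text{, from } J \to B \to A) \\
  &= P(A{=}1 \mid B{=}1)\, P_h(B{=}1) + P(A{=}1 \mid B{=}0)\, P_h(B{=}0)
\end{aligned}$$

which recovers formula (2).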