Proving Conjectures in First-Order Logic
Ingredients
- Here we introduce new ingredients: variables, constants and quantifiers ($\forall$, $\exists$).
- Furthermore, functions applied to terms are themselves terms (variables and constants are terms as well).
- Predicates are Boolean-valued functions of terms.
Consider the same example used in propositional logic: a fact that holds for any two people who know each other. Propositional logic requires listing this fact explicitly for every pair of people, whereas first-order logic represents it compactly: for every $X$ and $Y$, if $X$ knows $Y$, then $Y$ knows $X$, i.e. $\forall X\,\forall Y\,(\mathit{knows}(X,Y)\rightarrow \mathit{knows}(Y,X))$.
Therefore, first-order logic is more expressive than propositional logic.
FIRST-ORDER LOGIC IS UNDECIDABLE. Proof given by Alonzo Church in 1936 (note that truth tables cannot be used here, since first-order interpretations may range over infinite domains).
Definitions
- Literals are atoms (predicates applied to terms) or negations of atoms.
- A clause $C$ is a disjunction of literals $l_1\lor l_2\lor\dots\lor l_n$ or equivalently a set of literals $\{l_1,l_2,\dots,l_n\}$.
- A clausal form (expressed in conjunctive normal form, CNF) $\Sigma$ is a conjunction of clauses $C_1\land\dots\land C_c$ or equivalently a set of clauses (a set of sets) $\{C_1,\dots,C_c\}$, with all variables implicitly universally quantified ($\forall$).
Question. How can we check the validity of conjectures (or, equivalently, refute negated conjectures)? As in propositional logic, we can derive an inference rule based on resolution. First, however, we need to introduce the notions of substitution, unifier and most general unifier:
Substitution. A substitution $\sigma$ over a set of variables assigns a term (possibly another variable, i.e. a renaming) to each variable in the set. E.g., for an expression $\phi$ defined over the variables $\{X,Y\}$, the substitution $\sigma=\{X\leftarrow a, Y\leftarrow Y'\}$ replaces $X$ with the constant $a$ and renames $Y$ to $Y'$; the resulting expression is denoted $\phi\sigma$.
Unifier. A substitution $\sigma$ is a unifier for two expressions $\phi$ and $\psi$ iff $$\phi\sigma=\psi\sigma$$ If two expressions $\phi$ and $\psi$ have a unifier, they are unifiable.
Most general unifier. A substitution $\sigma$ is more general than a substitution $\tau$ iff there exists a substitution $\delta$ such that $\sigma\delta=\tau$ (i.e. $\tau$ is obtained by composing $\sigma$ with $\delta$). E.g., for $\sigma=\{X\leftarrow a, Y\leftarrow V\}$ and $\tau=\{X\leftarrow a, Y\leftarrow c, V\leftarrow c\}$ we have $\sigma\{V\leftarrow c\}=\tau$. A most general unifier (MGU) of two expressions is a unifier that is more general than every other unifier of those expressions.
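As a concrete illustration, here is a minimal Python sketch of syntactic unification (essentially Robinson's algorithm). The term representation is an assumption made for this sketch: variables are strings starting with an uppercase letter, while constants and function applications are tuples whose first element is the symbol.

```python
# A minimal sketch of syntactic unification (Robinson's algorithm).
# Terms: variables are strings starting with an uppercase letter;
# constants/functions are tuples (symbol, arg1, ..., argn); a constant is ("a",).

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def substitute(t, sigma):
    """Apply substitution sigma (a dict Var -> term) to term t."""
    if is_var(t):
        return substitute(sigma[t], sigma) if t in sigma else t
    return (t[0],) + tuple(substitute(arg, sigma) for arg in t[1:])

def occurs(v, t, sigma):
    """Occurs check: does variable v appear in t (under sigma)?"""
    t = substitute(t, sigma)
    if is_var(t):
        return v == t
    return any(occurs(v, arg, sigma) for arg in t[1:])

def unify(s, t, sigma=None):
    """Return a most general unifier of s and t (extending sigma), or None."""
    sigma = dict(sigma or {})
    s, t = substitute(s, sigma), substitute(t, sigma)
    if is_var(s):
        if s == t:
            return sigma
        if occurs(s, t, sigma):
            return None          # X cannot unify with a term containing X
        sigma[s] = t
        return sigma
    if is_var(t):
        return unify(t, s, sigma)
    if s[0] != t[0] or len(s) != len(t):
        return None              # clashing symbols or arities
    for a, b in zip(s[1:], t[1:]):
        sigma = unify(a, b, sigma)
        if sigma is None:
            return None
    return sigma

# Example: unify P(X, Y) with P(X1, a)  ->  {'X': 'X1', 'Y': ('a',)}
print(unify(("P", "X", "Y"), ("P", "X1", ("a",))))
```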
Two clauses $C_1,C_2$ without common variables clash iff there is a literal $l_1$ in $C_1$ and a literal $l_2$ in $C_2$ such that $l_1$ and $\neg l_2$ are unifiable. For example, if $C_1=\{P(X,Y),\dots\}$ and $C_2=\{\neg P(X',a),\dots\}$, then $P(X,Y)$ and $P(X',a)$ are unifiable by the substitution $\{X\leftarrow X',Y\leftarrow a\}$. The binary resolvent $C$ of clashing clauses $C_1$ and $C_2$ is then given by $$C=\big((C_1\setminus\{l_1\})\cup(C_2\setminus\{l_2\})\big)\sigma$$ where $\sigma$ is the most general unifier of $l_1$ and $\neg l_2$.
Resolution. Given a clausal form $\Sigma$, resolution takes two clashing clauses $C_1,C_2$, computes their binary resolvent $C$ and adds it to the clausal form. One can easily prove that this step preserves the truth of $\Sigma$: every model of $\{C_1,C_2\}$ is also a model of $C$.
Example
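For instance, take the clausal form of the knows sentence from the introduction together with a ground fact (the fact and the constants $a$, $b$ are purely illustrative): $$C_1=\{\neg \mathit{knows}(X,Y),\ \mathit{knows}(Y,X)\},\qquad C_2=\{\mathit{knows}(a,b)\}.$$ The literals $l_1=\neg \mathit{knows}(X,Y)$ and $l_2=\mathit{knows}(a,b)$ clash, since $l_1$ and $\neg l_2$ are unifiable with most general unifier $\sigma=\{X\leftarrow a,\ Y\leftarrow b\}$, so the binary resolvent is $$C=\big((C_1\setminus\{l_1\})\cup(C_2\setminus\{l_2\})\big)\sigma=\{\mathit{knows}(b,a)\}.$$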
Theorem. Binary resolution is sound but not complete for refutation.
Proof: Soundness can be proved as in the propositional case, while for completeness we can give a counterexample: there are unsatisfiable clause sets whose unsatisfiability binary resolution alone cannot detect. Consider $$\Sigma=\{\{P(X),P(Y)\},\{\neg P(U),\neg P(V)\}\},$$ which is unsatisfiable (instantiating $Y=X$ and $V=U$ forces $P$ to be both everywhere true and everywhere false).
In fact, binary resolution only generates two-literal clauses such as $\{P(Y),\neg P(V)\}$ (or, if variables are not renamed apart, tautologies such as $\{P(Y),\neg P(Y)\}$); the empty clause is never derived.
Question. How can we make it complete?
Factoring. If two or more literals of a clause $C$ have a most general unifier $\sigma$, then the clause can be replaced by its factor $C\sigma$.
Theorem. Binary resolution + factoring is sound and complete for refutation.
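For instance, factoring fixes the counterexample above: each clause can be factored into a unit clause, after which a single resolution step yields the empty clause $\Box$: $$\{P(X),P(Y)\}\ \xrightarrow{\{X\leftarrow Y\}}\ \{P(Y)\},\qquad \{\neg P(U),\neg P(V)\}\ \xrightarrow{\{U\leftarrow V\}}\ \{\neg P(V)\},\qquad \{P(Y)\},\{\neg P(V)\}\ \vdash\ \Box.$$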
Note that binary resolution + factoring may not terminate.
Example of non-termination for a satisfiable clause set (which we try, and fail, to refute). For instance, from $\{\{P(a)\},\{\neg P(X),P(f(X))\}\}$ resolution keeps deriving the new clauses $\{P(f(a))\},\{P(f(f(a)))\},\dots$; since the set is satisfiable, the empty clause is never found and the procedure runs forever.
Example of non-termination for an unsatisfiable clause set (which we want to refute).
It can be constructed from the previous case: for instance, add a pair of directly contradictory clauses such as $\{Q(b)\}$ and $\{\neg Q(b)\}$ as clauses 3 and 4, and imagine that resolution + factoring performs the same derivations as before without ever touching clauses 3 and 4. The set is unsatisfiable and a refutation exists, but this particular (unfair) sequence of inference steps never finds it.
Improving Efficiency for Proving Conjectures in First-Order Logic
Here, we show some strategies used to improve the efficiency of deduction systems.
Strategy 1: Ordered Resolution Calculus
Our aim is to check the validity of a conjecture, i.e. that $\Sigma\models\phi$ for a set of premises $\Sigma$ and a conclusion $\phi$. Equivalently, we can check the unsatisfiability of the set obtained by adding the negated conjecture, namely $\Sigma\cup\{\neg\phi\}$, and therefore apply resolution refutation to $\Sigma\cup\{\neg\phi\}$. Now, consider the example where $\Sigma=\{\{\neg a\},\{\neg b\},\{\neg c\}\}$ and $\phi$ is the conjunction of the premises, so that $\neg\phi$ corresponds to the clause $\{a,b,c\}$ and $\Sigma\cup\{\neg\phi\}=\{\{\neg a\},\{\neg b\},\{\neg c\},\{a,b,c\}\}$. There are many different ways to select clauses and literals for resolution, and most of them are useless for reaching a contradiction. Importantly, if the formula $\Sigma\cup\{\neg\phi\}$ is unsatisfiable, then all literals of $\neg\phi$ have to be resolved away. Based on this observation, we can impose an order on literals to reduce the search space: in the previous example, we can decide to resolve on $a$ first, then on $b$ and then on $c$. The notion of ordering literals extends to first-order logic (soundness and completeness can still be guaranteed). Since it is difficult to know a priori which ordering leads fastest to a possible contradiction, modern automated theorem provers based on resolution refutation typically choose a set of orderings and try all of them.
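As a small illustration, here is a propositional Python sketch of ordered resolution on the example above. The encoding is an assumption of this sketch: literals are integers ($1,2,3$ for $a,b,c$, negative numbers for their negations), the order $a>b>c$ is given by the atom number, and resolution is only allowed on a literal that is maximal in both parent clauses.

```python
# Ordered resolution, propositional sketch.
# Literals: +1, +2, +3 stand for a, b, c; -1, -2, -3 for their negations.
# Order a > b > c: the smaller the atom number, the larger the atom.

def maximal_atom(clause):
    """Atom of the largest literal in the clause under the order a > b > c."""
    return min(abs(l) for l in clause)

def ordered_resolvents(c1, c2):
    """Resolvents of c1 and c2 obtained by resolving on maximal literals only."""
    out = []
    for l in c1:
        if -l in c2 and abs(l) == maximal_atom(c1) and abs(l) == maximal_atom(c2):
            out.append(frozenset(c1 - {l}) | frozenset(c2 - {-l}))
    return out

def saturate(clauses):
    """Add ordered resolvents until the empty clause appears or nothing new is derived."""
    clauses = {frozenset(c) for c in clauses}
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                for r in ordered_resolvents(c1, c2):
                    if r not in clauses:
                        new.add(r)
        if frozenset() in new:
            return "unsatisfiable"
        if not new:
            return "saturated without the empty clause"
        clauses |= new

# Sigma ∪ {¬phi} from the example: {¬a}, {¬b}, {¬c}, {a, b, c}
print(saturate([{-1}, {-2}, {-3}, {1, 2, 3}]))  # -> unsatisfiable
```

Note how the ordering prunes the search: the clause $\{a,b,c\}$ may only be resolved on $a$ first, so the derivation is forced through $\{b,c\}$, then $\{c\}$, then the empty clause.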
Strategy 2: Tautology Elimination
Resolution can produce clauses which are tautologies, like $\{P(f(a)),\neg P(f(a)), Q(Y)\}$. Therefore, at each inference step, we can eliminate possible tautologies. Note that $\{P(f(X)),\neg P(f(a)), Q(Y)\}$ is not a tautology.
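A quick sketch of the corresponding syntactic check, assuming (as in the earlier sketch) that atoms are nested tuples and that each literal is represented as a (sign, atom) pair:

```python
# Syntactic tautology check: a clause is a tautology iff it contains a literal
# together with its exact complement (identical atom, opposite sign).

def is_tautology(clause):
    return any((not sign, atom) in clause for (sign, atom) in clause)

c1 = {(True, ("P", ("f", ("a",)))), (False, ("P", ("f", ("a",)))), (True, ("Q", "Y"))}
c2 = {(True, ("P", ("f", "X"))),    (False, ("P", ("f", ("a",)))), (True, ("Q", "Y"))}
print(is_tautology(c1))  # True:  P(f(a)) and ¬P(f(a)) are complementary
print(is_tautology(c2))  # False: P(f(X)) and ¬P(f(a)) are not identical atoms
```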
Strategy 3: Subsumption Elimination
The idea behind subsumption elimination is to eliminate clauses which do not add additional information.
Consider for example the two sentences Everyone tells the truth and Jack tells the truth. Clearly, the latter sentence provides no additional information for determining the truth value of the conjunction of the two sentences. Therefore, we can remove it.
Let's define subsumption formally:
Subsumption. $\phi$ subsumes $\psi$ iff there exists a substitution $\sigma$ such that $\phi\sigma\subseteq\psi$. $\phi$ is the subsuming clause and $\psi$ is the subsumed clause.
We have two kinds of subsumption:
- Forward subsumption. If resolution generates a clause $\psi$ from clauses $\phi_1$ and $\phi_2$ and $\phi_1$ subsumes $\psi$, then $\psi$ does not provide additional information. Therefore, the subsumed clause $\psi$ can be discarded.
- Backward subsumption. If resolution generates a clause $\psi$ from clauses $\phi_1$ and $\phi_2$ and $\psi$ subsumes an existing clause $\phi_3$, then $\phi_3$ no longer provides additional information and can be deleted, while $\psi$ is kept.
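Here is a minimal Python sketch of the subsumption test itself ($\phi$ subsumes $\psi$ iff some single substitution $\sigma$ maps every literal of $\phi$ onto a literal of $\psi$). The representation, with (sign, atom) literals, nested-tuple atoms and uppercase-string variables, is an assumption carried over from the earlier sketches.

```python
# Clause subsumption: phi subsumes psi iff there is a substitution sigma
# with phi*sigma ⊆ psi. Uses one-sided matching plus backtracking.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def match(pattern, target, sigma):
    """One-sided matching: extend sigma so that pattern*sigma == target, or return None."""
    if is_var(pattern):
        if pattern in sigma:
            return sigma if sigma[pattern] == target else None
        return {**sigma, pattern: target}
    if is_var(target) or pattern[0] != target[0] or len(pattern) != len(target):
        return None
    for p, t in zip(pattern[1:], target[1:]):
        sigma = match(p, t, sigma)
        if sigma is None:
            return None
    return sigma

def subsumes(phi, psi, sigma=None):
    """Try to map every literal of phi onto some literal of psi under one substitution."""
    sigma = sigma or {}
    if not phi:
        return True
    (sign, atom), *rest = list(phi)
    for (sign2, atom2) in psi:
        if sign == sign2:
            extended = match(atom, atom2, sigma)
            if extended is not None and subsumes(rest, psi, extended):
                return True
    return False

# {P(X)} subsumes {P(a), Q(b)} via {X <- a}
print(subsumes([(True, ("P", "X"))],
               [(True, ("P", ("a",))), (True, ("Q", ("b",)))]))  # True
```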
Extension of Proving Conjectures in First-Order Logic with Equality
The equality relation is fundamental for expressing many mathematical problems; for example, equality is necessary to describe systems of equations.
The equality symbol is defined by a set of axioms which need to be added to the clausal form if we want to prove the validity of a formula involving equality. Typically, the axioms have to be converted into conjunctive normal form before being added to the set of clauses. Including these axioms comes at the cost of extra complexity for the resolution refutation system (i.e. resolution needs to deal with more clauses).
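Concretely, the axioms to be added are, in clausal form (schematically, writing $=$ for the equality predicate): reflexivity, symmetry, transitivity, and congruence for every function and predicate symbol.
$$\{X = X\},\qquad \{\neg(X = Y),\ Y = X\},\qquad \{\neg(X = Y),\ \neg(Y = Z),\ X = Z\},$$
$$\{\neg(X_1 = Y_1),\dots,\neg(X_n = Y_n),\ f(X_1,\dots,X_n) = f(Y_1,\dots,Y_n)\}\quad\text{for every $n$-ary function symbol $f$},$$
$$\{\neg(X_1 = Y_1),\dots,\neg(X_n = Y_n),\ \neg P(X_1,\dots,X_n),\ P(Y_1,\dots,Y_n)\}\quad\text{for every $n$-ary predicate symbol $P$}.$$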
Nevertheless, there is a set of inference rules which is guaranteed to respect the equality axioms when resolving/factoring clauses containing equalities, without the need to state those axioms explicitly. The system using these inference rules is called the paramodulation calculus (there is also an extension, the ordered paramodulation calculus, which is proved to be sound and complete). Check HERE + HERE at 1'12''00 for more details about paramodulation.
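Schematically, one common formulation of the (unordered) paramodulation rule is: given a clause containing a positive equality $s = t$ and another clause $D$ containing a subterm $u$ unifiable with $s$, rewrite that occurrence of $u$ to $t$:
$$\frac{C\cup\{s = t\}\qquad D[u]}{\big(C\cup D[t]\big)\sigma}\qquad \sigma=\mathrm{mgu}(s,u),$$
where $D[t]$ denotes $D$ with the chosen occurrence of $u$ replaced by $t$.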
Nevertheless, the ordered paramodulation calculus is still very inefficient when many equalities are present: the number of generated clauses still explodes. There are other inference rules, based on the Knuth-Bendix completion algorithm (originally developed for term rewriting, where a system of equations is given as input and a set of rewrite rules, i.e. oriented equalities, is extracted from it), which handle equalities more efficiently and lead to the so-called superposition calculus.