Propositional Logic Reduction

  • Last Updated : 28 Feb, 2022

It is possible to reduce first-order inference to propositional inference once we have rules for inferring nonquantified sentences from quantified ones. The first idea is that, just as an existentially quantified sentence can be replaced by one instantiation, a universally quantified sentence can be replaced by the set of all its possible instantiations. Suppose, for example, that our knowledge base contains just the sentences

\forall x \; \operatorname{King}(x) \wedge \operatorname{Greedy}(x) \Rightarrow \operatorname{Evil}(x)

\operatorname{King}(John)

\operatorname{Greedy}(John)

\operatorname{Brother}(Richard, John)

Then we apply Universal Instantiation (UI) to the first sentence, using all possible ground-term substitutions from the vocabulary of the knowledge base — in this case, \{x / John\}   and \{x / Richard\}  . We obtain \begin{aligned} &\operatorname{King}(John) \wedge \operatorname{Greedy}(John) \Rightarrow \operatorname{Evil}(John) \\ &\operatorname{King}(Richard) \wedge \operatorname{Greedy}(Richard) \Rightarrow \operatorname{Evil}(Richard) \end{aligned}

We then discard the universally quantified sentence. If the ground atomic sentences — \operatorname{King}(John), \operatorname{Greedy}(John)  , and so on — are viewed as proposition symbols, the knowledge base is now essentially propositional. As a result, any of the complete propositional procedures can be used to derive conclusions such as \operatorname{Evil}(John)  .
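The substitution-then-forward-chaining process above can be sketched in a few lines of Python. This is a minimal illustration, not a general theorem prover: the rule and fact representations (strings with a variable `x`) and all names are invented for this example.

```python
# Constant symbols and facts from the example knowledge base.
constants = ["John", "Richard"]
facts = {"King(John)", "Greedy(John)", "Brother(Richard,John)"}

# The rule: forall x. King(x) & Greedy(x) => Evil(x),
# stored as (premises, conclusion) with variable "x".
rule = (["King(x)", "Greedy(x)"], "Evil(x)")

# Universal Instantiation: one ground rule per constant symbol.
ground_rules = []
for c in constants:
    premises, conclusion = rule
    ground_rules.append(([p.replace("x", c) for p in premises],
                         conclusion.replace("x", c)))

# Simple propositional forward chaining over the ground rules:
# keep firing rules whose premises are all known until nothing changes.
changed = True
while changed:
    changed = False
    for premises, conclusion in ground_rules:
        if conclusion not in facts and all(p in facts for p in premises):
            facts.add(conclusion)
            changed = True

print("Evil(John)" in facts)     # True: entailed
print("Evil(Richard)" in facts)  # False: Richard is not known to be greedy
```

Note that the original universally quantified rule is never consulted after instantiation; only its ground copies participate in the propositional inference.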

The propositionalization technique can be applied to any first-order knowledge base and query, and it preserves entailment. As a result, we have a complete decision procedure for entailment… or do we? There is a catch: when the knowledge base includes a function symbol, the set of possible ground-term substitutions is infinite! If the function symbol Father   appears in the knowledge base, for example, then infinitely many nested terms can be constructed, such as \text { Father(Father(Father(John))) }  . Our propositional algorithms will have difficulty with an infinite set of sentences.

Fortunately, Jacques Herbrand (1930) proved that if a sentence is entailed by the original first-order knowledge base, then there is a proof involving only a finite subset of the propositionalized knowledge base. We can find this subset by first generating all the instantiations with constant symbols (\text{Richard and John}  ), then all terms of depth 1 \text{(Father(Richard) and Father(John))}  , then all terms of depth 2, and so on, until we are able to construct a propositional proof of the entailed sentence.
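The depth-by-depth term generation suggested by Herbrand's theorem can be sketched as follows. This is an illustrative fragment under the example's vocabulary, not a real prover: a complete procedure would propositionalize with the terms at each depth and attempt a proof before going deeper.

```python
# Vocabulary from the running example: two constants, one unary function.
constants = ["John", "Richard"]
functions = ["Father"]

def terms_up_to_depth(max_depth):
    """All ground terms with at most max_depth nested function applications."""
    terms = list(constants)       # depth 0: the constant symbols themselves
    frontier = list(constants)
    for _ in range(max_depth):
        # Wrap every term of the previous depth in every function symbol.
        frontier = [f"{f}({t})" for f in functions for t in frontier]
        terms.extend(frontier)
    return terms

print(terms_up_to_depth(2))
# ['John', 'Richard', 'Father(John)', 'Father(Richard)',
#  'Father(Father(John))', 'Father(Father(Richard))']
```

Each iteration of the loop yields the next layer of the (infinite) Herbrand universe; the theorem guarantees that, for an entailed sentence, some finite depth already suffices for a propositional proof.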

We have sketched a complete approach to first-order inference via propositionalization: any entailed sentence can be proved. Given the vast number of possible models, this is a significant accomplishment. However, we do not know that a sentence is entailed until the proof is complete! What happens when the sentence is not entailed? Can we tell? It turns out that, for first-order logic, we cannot. Our proof procedure can run on and on, generating more and more deeply nested terms, and we will not know whether it is stuck in a hopeless loop or whether a proof is about to emerge. This is very similar to the halting problem for Turing machines. Alonzo Church (1936) and Alan Turing (1936) each demonstrated, in different ways, the inevitability of this state of affairs. The question of entailment for first-order logic is semidecidable: algorithms exist that say yes to every entailed sentence, but no algorithm exists that also says no to every nonentailed sentence.

