**Within Arithmetic, Truth and Provability are ONE-AND-THE-SAME**

**Provable(“2 + 3 = 5”) ↔ True(“2 + 3 = 5”)**

**Refutable(“2 + 3 = 7”) ↔ False(“2 + 3 = 7”)**

There exists a logic sentence LP such that LP is materially equivalent to a proof of its own negation: **∃LP (LP ↔ (⊢ ¬LP))**

This truth table proves that there is no Liar Paradox logic sentence:

**∃LP (LP ↔ (⊢ ¬LP))
……….F…..F…..T……….// When LP ≡ “2 + 3 = 7” LP is False and Refutable
……….T…..F…..F……….// When LP ≡ “2 + 3 = 5” LP is True and ¬Refutable**
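The two rows above can be checked mechanically. This is a minimal Python sketch of the document's identification of truth with provability (the boolean encoding, where `provable_not_lp` stands in for ⊢¬LP, is my own illustration):

```python
# Toy check of the truth table, assuming truth is identified with
# provability, so (⊢ ¬LP) holds exactly when LP is false.
for lp in (False, True):
    provable_not_lp = not lp                  # stands in for (⊢ ¬LP)
    assert (lp == provable_not_lp) is False   # LP ↔ (⊢ ¬LP) fails in both rows
```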

**Wittgenstein’s minimal essence of Gödel’s 1931 Incompleteness sentence**

I imagine someone asking my advice; he says: “I have constructed a proposition (I will use ‘P’ to designate it) in Russell’s symbolism, and by means of certain definitions and transformations it can be so interpreted that it says ‘P is not provable in Russell’s system’. **∃P (P ↔ (RS ⊬ P))**

**Adapted for the minimal essence of the 1931 Incompleteness Theorem**

There exists a logic sentence G such that G is materially equivalent to its own unprovability: **∃G (G ↔ (⊬ G))**

This truth table proves that logic sentence G does not exist

**∃G (G ↔ (⊬ G))
…….F….F….T……….// When G ≡ “2 + 3 = 7” G is False and ¬Provable
…….T….F….F……….// When G ≡ “2 + 3 = 5” G is True and ¬¬Provable**
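The same mechanical check applies to this table. A minimal Python sketch, again assuming the document's identification of truth with provability (so that ⊬G holds exactly when G is false; the encoding is my own illustration):

```python
# Toy check of the truth table for G ↔ (⊬ G), assuming truth is
# identified with provability, so (⊬ G) holds exactly when G is false.
for g in (False, True):
    unprovable_g = not g                  # stands in for (⊬ G)
    assert (g == unprovable_g) is False   # G ↔ (⊬ G) fails in both rows
```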

The first incompleteness theorem states that in any consistent formal system F within which a certain amount of arithmetic can be carried out, there are statements of the language of F which can neither be proved nor disproved in F.

Wittgenstein’s “notorious paragraph” about the Gödel Theorem

**Wittgenstein’s words:**

**// Wittgenstein’s minimal essence of the 1931 Incompleteness Theorem**

“I have constructed a proposition (I will use ‘P’ to designate it) in Russell’s symbolism, and by means of certain definitions and transformations it can be so interpreted that it says ‘P is not provable in Russell’s system’.

Formalized by Olcott: **P ↔ (RS ⊬ P)**

**// Wittgenstein definitions of True() and False()**

‘True in Russell’s system’ means, as was said: proved in Russell’s system; and ‘false in Russell’s system’ means: the opposite has been proved in Russell’s system.

Formalized by Olcott: (LHS := RHS means LHS is defined as the RHS)

**∀x (True(RS, x) := (RS ⊢ x))**

**∀x (False(RS, x) := (RS ⊢ ¬x))**

**My proof that Wittgenstein is correct**

**P ↔ (RS ⊬ P)** // Minimal Essence of the 1931 Incompleteness Theorem

The first two rows of the truth table are determined by Wittgenstein’s definitions of True and False, as specified by the formalized axiom schemata shown above in Wittgenstein definitions of True() and False().

The failure of logical equivalence on the last two rows of the truth table shows that both P and ¬P are contradicted (false) in the above formula, thus meeting the [epistemological antinomy] sufficiency condition that Gödel stipulated for proof equivalence:

“14 Every epistemological antinomy can likewise be used for a similar undecidability proof.” (Gödel 1931:40)

P ↔ (RS ⊢ P)
T-T-----T-T // (P is true ↔ P is provable) is true
F-T-----T-F // (P is false ↔ ¬P is provable) is true
T-F-----F-T // (P is true ↔ P is ¬provable) is false
F-F-----F-F // (P is false ↔ ¬P is ¬provable) is false
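The four rows above can be enumerated exhaustively. A minimal Python sketch treating P and its provability as independent booleans (this encoding is my own illustration):

```python
from itertools import product

# Enumerate the four rows of P ↔ (RS ⊢ P): the biconditional holds
# exactly on the two rows where truth and provability agree, and
# fails on the two rows where they diverge.
rows = [(p, provable_p, p == provable_p)
        for p, provable_p in product((True, False), repeat=2)]

assert sum(1 for _, _, eq in rows if eq) == 2      # two agreeing rows
assert sum(1 for _, _, eq in rows if not eq) == 2  # two diverging rows
```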

Gödel’s famous sentence is simply false in RS because it is materially equivalent to a self-contradiction and thus semantically ill-formed in RS.

**We are now switching from Wittgenstein’s names:**

(1) Russell’s System (RS) becomes Peano Arithmetic (PA)

(2) Wittgenstein’s P becomes Gödel’s G

If G is not provable in PA then G is not true in PA. If G is provable in the Gödelization of PA then G is true in the Gödelization of PA. Diagonalization is an alternative form of provability.

The Gödelization of PA is a distinctly different formal system than PA as shown by the difference of the provability of G.

The key difference between PA and the Gödelization of PA is that G is inside the scope of self-contradiction in PA and G is outside the scope of self-contradiction in the Gödelization of PA.

When we address the comparable proof in the Tarski Undefinability Theorem we also address another aspect of the Incompleteness Theorem.

For formal systems that are not expressive enough to have their own provability operator, as in Gödel’s PA and Tarski’s Theory, there are indeed expressions of PA and of Tarski’s Theory that can be shown to be true in the Gödelization of PA and in Tarski’s meta-theory respectively, yet are not provable (and therefore untrue) in PA and in Tarski’s Theory.

Here is a key brand-new insight anchored in the Tarski Undefinability Theorem. Tarski concluded that an infinite hierarchy of Meta-Theories was required to always have provability. This can easily be shown to be untrue.

We simply have two different versions of Tarski’s Theory and two different versions of his Meta-Theory, having exactly the same relations as the original Theory and Meta-Theory, yet with all of these relations named differently.

Meta-Theory-A is in terms of the relations of Theory-A.

Meta-Theory-B is in terms of the relations of Theory-B.

Now we have two Meta-Theories at the exact same hierarchy level that can each prove the unprovable expressions of the other. They can do this because these expressions are outside of the scope of self-contradiction.

Copyright 2019 Pete Olcott All rights reserved

When true is defined as deductively sound conclusions:

(1) True is ALWAYS defined (thus not undefinable).

(2) Everything ELSE is defined as untrue (eliminating undecidability and incompleteness).

**Here is how that is formalized:**

When we specify that True(x) is the set of consequences of the subset of conventional formal proofs of mathematical logic having true premises, then True(x) is always defined and never undefinable.

The above specifies formal systems that are complete and consistent, and in which True(x) is always defined. We consider axioms to be specified by a finite list of finite strings combined with a finite list of axiom schemata. Rules-of-inference are specified by another finite list.

The above specs make my whole point!

**Understanding this post requires total familiarity with formal proofs of mathematical logic as explained in the following two pages of Mendelson:**

http://liarparadox.org/Provable_Mendelson.pdf

**Introduction to Mathematical logic Sixth edition Elliott Mendelson (2015) **


If the notion of True(x) is defined as provable from axioms, and axioms are stipulated to be finite strings having the semantic property of Boolean true, then every expression of language that is not a theorem or an axiom is not true.

Because valid deduction from true premises necessarily derives a true consequence, we know that the following predicate pair consistently decides every deductively sound argument. So we only need to exclude deductively unsound arguments to define a complete and consistent formal system.

// LHS := RHS means the LHS is defined as an alias for the RHS

**∀x True(x) := ⊢x**

**∀x False(x) := ⊢¬x**

This additional axiom schema requires that all logic sentences be either true or false: **∀x Deductively_Sound_Consequence(x) := True(x) ∨ False(x)**

Every closed WFF that is not the consequence of a deductively sound argument is construed to be semantically incorrect and excluded from membership in the set of logic sentences. Formalized paradoxes and many undecidable decision problems fall into this category.
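The trichotomy described above (True when provable, False when the negation is provable, otherwise excluded from the logic sentences) can be sketched as a toy Python classifier. The theorem set and sentence strings here are hypothetical stand-ins for a real proof procedure:

```python
# Toy model of the definitions above, with a hypothetical theory
# given as a finite set of provable strings.
theorems = {"2 + 3 = 5", "not(2 + 3 = 7)"}

def classify(x: str) -> str:
    if x in theorems:                 # True(x) := ⊢x
        return "True"
    if f"not({x})" in theorems:       # False(x) := ⊢¬x
        return "False"
    return "not a logic sentence"     # semantically excluded

assert classify("2 + 3 = 5") == "True"
assert classify("2 + 3 = 7") == "False"
assert classify("this sentence is false") == "not a logic sentence"
```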

**Now we apply deductively sound formal proofs of mathematical logic to the 1931 Incompleteness Theorem**

https://plato.stanford.edu/entries/goedel-incompleteness

The first incompleteness theorem states that in any consistent formal system F within which a certain amount of arithmetic can be carried out, there are statements of the language of F which can neither be proved nor disproved in F. (Raatikainen 2018)

**When we formalize the essence of above we get this logic sentence:**

∃F ∈ Formal_System ∃G ∈ Closed_WFF(F) (G ↔ ((F ⊬ G) ∧ (F ⊬ ¬G)))

We are assuming consistency without specifying it and we don’t limit the scope to arithmetic. If the above sentence is false then Gödel is wrong.

Within the sound deductive inference model the G of this expression: ((F ⊬ G) ∧ (F ⊬ ¬G)) is a closed WFF that is neither true nor false, thus excluding it from membership in the set of logic sentences.

Within the sound deductive inference model the RHS portion of this expression: G ↔ ((F ⊬ G) ∧ (F ⊬ ¬G)) cannot be evaluated for material equivalence with G because it is not a logic sentence.

∴ There is no G that is materially equivalent to its own unprovability and its own irrefutability, because this would require it to be materially equivalent to neither True nor False.

A connected set of known truths necessarily always derives truth.

**My basis for doing this is the sound deductive inference model with:**

*[a connected sequence of valid deductions from true premises to a true conclusion]*

**This equivalent end result is achieved in symbolic logic:**

*[a connected sequence of valid inference from axioms to a true consequence]*

By construing Axioms as expressions of language having the semantic property of Boolean true.

This last part embeds the semantic notion of Boolean values directly in the syntax of formal expressions. Rudolf Carnap proposed this same idea in his (1952) Meaning Postulates.

**Now we have: [Deductively Sound Formal Proofs] — True(x) ↔ (⊢x)**

Defining a specification of the notion of formal systems having a universal truth predicate without undecidability, incompleteness or inconsistency.

When one begins with truth and applies valid inference then one necessarily derives truth: (True(P) ⊢ True(C)).

**Both Tarski and Gödel “prove” that provability can diverge from Truth.**

When we boil their claim down to its simplest possible essence, it is really claiming that valid inference from true premises might not always derive a true conclusion. This is obviously impossible.

Within the sound deductive inference model there is a *(connected sequence of valid deductions from true premises to a true conclusion)*; unlike the formal proofs of symbolic logic, provability cannot diverge from truth.

When we consider sound deductive inference to the negation of a conclusion we now also have a definitive specification of falsity.

Within the sound deductive inference model we can be certain that valid inference from true premises derives a true conclusion.

∴ Within the sound deductive inference model any logic sentence that does not evaluate to True or False is unsound; there is no undecidability or incompleteness in this model.

The key criterion measure that the sound deductive inference model would add to the formal proofs to theorem consequences of symbolic logic would be the semantic notion of soundness.

**Formalizing Sound Deductive Inference in Symbolic Logic**

**Axiom(0) Stipulates this definition of Axiom**

An Axiom is any expression of language defined to have the semantic value of Boolean True.

**Stipulating this specification of True and False:**

Axiom(1) ∀F ∈ Formal_System ∀x ∈ Closed_WFF(F) (True(F, x) ↔ (F ⊢ x))

Axiom(2) ∀F ∈ Formal_System ∀x ∈ Closed_WFF(F) (False(F, x) ↔ (F ⊢ ¬x))

**Stipulating that formal systems are Boolean:**

Axiom(3) ∀F ∈ Formal_System ∀x ∈ Closed_WFF(F) (True(F,x) ∨ False(F,x))

**Axiom(0)** provides the symbolic logic equivalent of true premises.

**Axiom(1)** and Axiom(2) stipulate consequences are provable from axioms.

**Axiom(3)** screens out unsound (semantically malformed) sentences as not belonging to the formal system.

**The above inference model directly refutes the third line of Tarski’s proof:**

http://liarparadox.org/Tarski_Proof_275_276.pdf

**(3) x ∉ Pr if and only if x ∈ Tr**

**~Provable(x) ↔ True(x)**

Because of the way that the Prolog inference model works, the above expression would be false, and its negation Provable(x) ↔ True(x) would be true. So the Prolog inference model already refutes Tarski’s line (3), which in turn causes the whole rest of Tarski’s proof to fail.
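The closed-world reading attributed to Prolog here can be sketched in Python: whatever is not derivable is treated as not true. The sentence sets below are hypothetical illustrations:

```python
# Closed-world sketch: "true" means "derivable", mirroring the
# negation-as-failure behavior described above.
provable = {"a", "b"}        # hypothetical derivable sentences
universe = {"a", "b", "c"}   # hypothetical closed WFFs

def true(x):
    return x in provable     # true exactly when derivable

for x in universe:
    assert (x in provable) == true(x)            # Provable(x) ↔ True(x) holds
    assert not ((x not in provable) == true(x))  # ~Provable(x) ↔ True(x) fails
```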

Very slight changes can be made to the way that formal systems are defined, eliminating undecidability in all of these formal systems. The key change is that undecidable sentences are decided to have a truth value of ~True.

The remaining sentences are True based on their provability or False based on the provability of their negation. By provability it is meant that these expressions are theorems on the basis of their having an empty set of premises.

Tarski “proved” that there cannot possibly be any correct formalization of the notion of truth entirely on the basis of an insufficiently expressive formal system that was incapable of recognizing and rejecting semantically incorrect expressions of language.

(2) x ∈ Tr ↔ p // p ↔ True(x)

(3) x ∉ Pr ↔ x ∈ Tr // ~Provable(x) ↔ True(x)

By making x and p concrete such as:

x ≡ 3 > 5

p ≡ 3 < 5

(1) ~Provable(3 > 5) ↔ {3 < 5}

(2) True(3 > 5) ↔ {3 < 5}

(3) ~Provable(3 > 5) ↔ True(3 > 5)

There is no way to concretely define x and p such that line (3) logically follows from lines (1) and (2).

**The 1936 Tarski Undefinability Proof**

http://liarparadox.org/Tarski_Proof_275_276.pdf

**Tarski Undefinability Theorem Succinctly Refuted**

http://liarparadox.org/Tarski_Undefinability_Refutation.pdf

Meaning_Postulates_Rudolf_Carnap_1952.pdf

http://liarparadox.org/Haskell_Curry_45.pdf

http://liarparadox.org/247_248.pdf

http://liarparadox.org/Wittgenstein.pdf

https://en.wikipedia.org/wiki/Theory_(mathematical_logic)

The construction of a theory begins by specifying a definite non-empty conceptual class E the elements of which are called statements. These initial statements are often called the primitive elements or elementary statements of the theory, to distinguish them from other statements which may be derived from them.

A theory T is a conceptual class consisting of certain of these elementary statements. The elementary statements which belong to T are called the elementary theorems of T and said to be true. In this way, a theory is a way of designating a subset of E which consists entirely of true statements. **(Haskell Curry, Foundations of Mathematical Logic, 2010). **

**This is the same page from his 1977 Book:**

http://liarparadox.org/Haskell_Curry_45.pdf

**Semantic meaning of English sentences specified in First Order Logic**

Meaning Postulates Rudolf_Carnap (1952)

(1) Bachelor(Jack) → ~Married(Jack)

(2) Black(Fido) ∨ ~Black(Fido)

**Expressing the (a) transitivity (b) irreflexivity and (c) asymmetry relations of the English word Warmer: **

(a) (x)(y)(z) (Warmer(x,y) ∧ Warmer(y,z) → Warmer(x,z))

(b) (x) ~Warmer(x,x)

(c) (x)(y) (Warmer(x,y) → ~Warmer(y,x))

Translated by B. MELTZER

Introduction by R. B. BRAITHWAITE


**Here is the big mistake of the current mathematical understanding of provability that is derived from the principle of explosion: **

p = “I am going to the store”

q = “I am going to the store to buy groceries”

~p ⊢ q

The fact that I am not going to the store at all proves that I am going to the store to buy groceries.

Principle of explosion: ∀P∀Q: (P∧~P) ⊢ Q asserts that:

**It logically follows that there exists a provability relation between every pair of totally unrelated expressions of language.**

Since any expression of language can be conjoined to its negation, thus forming the EFQ premise, EFQ asserts that any other totally unrelated expression of language is proven on the basis of this totally unrelated premise.
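The contradictory premise of the schema can be checked exhaustively. A minimal Python sketch (the boolean encoding is my own illustration): no truth assignment ever satisfies P ∧ ¬P, so under the material conditional (P ∧ ¬P) → Q holds only vacuously.

```python
from itertools import product

# The EFQ premise (P ∧ ¬P) is unsatisfiable: no assignment makes it true.
assert all(not (p and not p) for p in (True, False))

# Consequently the material conditional (P ∧ ¬P) → Q is vacuously
# true for every P and Q, since its antecedent never holds.
assert all((not (p and not p)) or q
           for p, q in product((True, False), repeat=2))
```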

Validity and Soundness https://www.iep.utm.edu/val-snd/

A deductive argument is said to be valid if and only if it takes a form that makes it impossible for the premises to be true and the conclusion nevertheless to be false. Otherwise, a deductive argument is said to be invalid.

The above definition of valid deduction (logical consequence) is insufficient in that it does not require a semantic connection between the premises and the conclusion. I show how this semantic connection can be formalized syntactically below.

A wf C is said to be a consequence in S of a set Γ of wfs if and only if there is a sequence B1, …, Bk of wfs such that C is Bk and, for each i, either Bi is an axiom or Bi is in Γ, or Bi is a direct consequence by some rule of inference of some of the preceding wfs in the sequence. Such a sequence is called a proof (or deduction) of C from Γ. The members of Γ are called the hypotheses or premisses of the proof. We use Γ ⊢ C as an abbreviation for “C is a consequence of Γ”… **(Mendelson 2015:28)**

**Semantic connection between the premises and the conclusion**

This is formalized by requiring that all of the premises of the proof be included in the inference chain from the LHS of “⊢” to the RHS consequence; extraneous premises are not allowed. A formal proof can be constructed (with all the inference steps shown) from the premises to the consequence, with every premise required in at least one inference step.

**Semantically relevant premises:**

{“5 > 4”, “4 > 3”} ⊢ “5 > 3”

**Specifying these semantically relevant inference steps:**

“5 > 4”

“4 > 3”

———-

∴ “5 > 3”

The above makes this evaluate to false: **∴ ∃P∃Q ((P∧~P) ⊢ Q)**.

Thus certainly refuting this: **∀P∀Q: (P∧~P) ⊢ Q** (Principle of explosion)
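The premise-usage requirement described above can be sketched as a simple Python check: every premise must occur in at least one inference step. The (conclusion, premises-used) step representation is my own illustration, not a standard formalism:

```python
# Sketch of the premise-relevance requirement: a proof is
# "semantically connected" only if every premise is used in at
# least one inference step.
def semantically_connected(premises, steps):
    used = set()
    for _conclusion, used_premises in steps:
        used |= used_premises
    return set(premises) <= used

# The worked example above: {"5 > 4", "4 > 3"} ⊢ "5 > 3"
steps = [("5 > 3", {"5 > 4", "4 > 3"})]
assert semantically_connected({"5 > 4", "4 > 3"}, steps)
# An extraneous, unused premise breaks the semantic connection:
assert not semantically_connected({"5 > 4", "4 > 3", "1 > 0"}, steps)
```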

**Mendelson, Elliott 2015**

Introduction to Mathematical logic Sixth edition

1.4 An Axiom System for the Propositional Calculus page 28

∀x False(x) ↔ ⊢~x // universal False predicate

∀x ~True(x) ↔ ~⊢x // universal ~True predicate

∀x ~False(x) ↔ ~⊢~x // universal ~False predicate

∀x ((~True(x) ∧ ~False(x)) → ~Logic_Sentence(x))

**The proof that the Olcott universal Truth predicate is correct is twofold:**

(1) It always corresponds to the intuitive notion of True(x).

(2) Applied to logic and math it makes them both consistent and complete.

We don’t need to specify any particular formal system, we are examining this at the level of the formal language, which only requires the conventional semantics associated with the logical operators.

**“This sentence is not true.” — Formalized as: LP ↔ ~⊢LP**

**English:** A sentence of formal language is materially equivalent to a statement of its own unprovability.

LP ↔ ~⊢LP // This sentence is not provable

If ⊢LP this contradicts its assertion: (~⊢LP) ∴ ~True(LP)

If ⊢~LP this contradicts its assertion: ~(~⊢LP) ∴ ~False(LP)

∀x ((~True(x) ∧ ~False(x)) → ~Logic_Sentence(x))

**∴ ~Logic_Sentence(LP)**

**“This sentence is false.” — Formalized as: LP ↔ ⊢~LP**

**English:** A sentence of formal language is materially equivalent to a statement of its own refutation.

LP ↔ ⊢~LP // This sentence is false

If ⊢LP this contradicts its assertion: (⊢~LP) ∴ ~True(LP)

If ⊢~LP this contradicts its assertion: ~(⊢~LP) ∴ ~False(LP)

∀x ((~True(x) ∧ ~False(x)) → ~Logic_Sentence(x))

**∴ ~Logic_Sentence(LP)**

**Gödel’s 1931 Incompleteness sentence: G ↔ ~Provable(G)**

**English:** A sentence of formal language is materially equivalent to a statement of its own unprovability.

G ↔ ~⊢G // This sentence is not provable

If ⊢G this contradicts its assertion: (~⊢G) ∴ ~True(G)

If ⊢~G this contradicts its assertion: ~(~⊢G) ∴ ~False(G)

∀x ((~True(x) ∧ ~False(x)) → ~Logic_Sentence(x))

**∴ ~Logic_Sentence(G)**

**Tarski’s 1936 Undefinability sentence: ~Provable(x) ↔ True(x)**

**English:** The unprovability of a sentence of formal language is materially equivalent to a statement of its own Truth.

Assuming that the Olcott universal Truth predicate True(x) ↔ Provable(x) is correct, the Tarski sentence ~Provable(x) ↔ True(x) is rejected.

**<Kurt Gödel>**

**ON FORMALLY UNDECIDABLE PROPOSITIONS OF PRINCIPIA
MATHEMATICA AND RELATED SYSTEMS I by Kurt Gödel Vienna**

(pages 40-41)

The analogy between this result and Richard’s antinomy leaps to the eye; there is also a close relationship with the “liar” antinomy,14 since the undecidable proposition [R(q); q] states precisely that q belongs to K, i.e. according to (1), that [R(q); q] is not provable. We are therefore confronted with a proposition which asserts its own unprovability.

14 Every epistemological antinomy can likewise be used for a similar undecidability proof.

**</Kurt Gödel>**

**Reference to (14):** Thus the error that causes the undecidability of the Liar Paradox would equally apply to the 1931 Incompleteness Theorem.

**∴ Logic_Sentence(G ↔ ~Provable(G)) == False**

**Decomposition of a proposition into its two distinct Semantic properties:**

(1) Its Assertion. // What it is claiming to be true.

(2) The Satisfiability or Falsifiability of its Assertion.

Theories that characterize or define a relativized concept of truth (truth in a model, truth in an interpretation, valuation, or possible world) set out from the start in a direction different from that proposed by Convention T. Because they substitute a relational concept for the single-place truth predicate of T-sentences, such theories cannot carry through the last step of the recursion on truth or satisfaction which is essential to the quotation-lifting feature of T-sentences. (Davidson, Donald 1973: 68)

Davidson, Donald 2001

Inquiries into Truth and Interpretation (Second Edition)

CLARENDON PRESS · OXFORD

Essay 5. In Defence of Convention T (1973)

**Copyright 2018 Pete Olcott**

(error now unequivocally shown)

Formalized mathematically as this predicate logic statement:

not_true(X) :- X -> false ; true. % succeeds exactly when X fails (negation as failure)

materially_equivalent(X, Y) :- (X, Y) ; (not(X), not(Y)). % X ↔ Y: both hold or both fail

**Liar Paradox formalized as Prolog:**

materially_equivalent(not_true(LP), LP).

**This fails indicating an infinitely recursive structure:**

?- unify_with_occurs_check(not_true(LP), LP).

**Therefore this predicate cannot be evaluated:**

?- materially_equivalent(not_true(LP), LP).

Thus showing the precise nature of the unequivocal error of the Liar Paradox and related self-referential undecidable decision problems.
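The occurs-check failure above can be illustrated outside Prolog as well. This is a minimal Python sketch of the occurs check, with compound terms encoded as tuples (the encoding is my own illustration):

```python
# Minimal occurs check: a variable must not be unified with a term
# that contains it, since that would build an infinitely recursive
# structure such as not_true(not_true(not_true(...))).
def occurs_in(var, term):
    if term == var:                       # the variable itself
        return True
    if isinstance(term, tuple):           # compound term (functor, *args)
        return any(occurs_in(var, arg) for arg in term[1:])
    return False

LP = "LP"                      # a logic variable, named by a string
liar = ("not_true", LP)        # encodes the term not_true(LP)
assert occurs_in(LP, liar)     # occurs check fires: unification rejected
assert not occurs_in("X", liar)  # an unrelated variable is unaffected
```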

http://mathworld.wolfram.com/Undecidable.html

Not decidable as a result of being neither formally provable nor unprovable.

**Pinnacle of my work (over 12,675 USENET messages) since 1998:
Tarski’s formal correctness formula: ∀x True(x) ↔ φ(x)
is completed by this RHS: ∀x True(x) ↔ ⊢x**

**That tiny little formula provides the basis for correctly refuting:**

(1) The Liar Paradox

(2) The essence of the 1931 Incompleteness Theorem

(3) The 1936 Tarski Undefinability Theorem

(4) The 1936 Alan Turing Halting Problem

**All four have (previously undiscovered) infinitely recursive structure.**

Copyright 2018 (and many other years since 1998) Pete Olcott


A wf C is said to be a consequence in S of a set Γ of wfs if and only if there is a sequence B1, …, Bk of wfs such that C is Bk and, for each i, either Bi is an axiom or Bi is in Γ, or Bi is a direct consequence by some rule of inference of some of the preceding wfs in the sequence. Such a sequence is called a proof (or deduction) of C from Γ. The members of Γ are called the hypotheses or premisses of the proof. We use Γ ⊢ C as an abbreviation for “C is a consequence of Γ”

…

If Γ is the empty set ∅, then ∅ ⊢ C if and only if C is a theorem. It is customary to omit the sign “∅” and simply write ⊢ C. Thus, ⊢ C is another way of asserting that C is a theorem. (Mendelson 2015: 28)

Formalism (philosophy of mathematics)

In foundations of mathematics, philosophy of mathematics, and philosophy of logic, formalism is a theory that holds that statements of mathematics and logic can be considered to be statements about the consequences of certain string manipulation rules.

Thus a formalist proof can be considered as finite string transformation rules (in the language of predicate logic) applied to finite strings representing:

(1) A possibly empty set of premises,

(2) A set of axioms,

(3) A set of rules of inference.

deriving a finite string consequence.

Sentence (mathematical logic)

In mathematical logic, a sentence of a predicate logic is a boolean-valued well-formed formula with no free variables. A sentence can be viewed as expressing a proposition, something that must be true or false.

**The above provides the conventional basis for the following analysis:**

The key conventional understanding that I have overcome with the following new insights is that some WFF expressions of formal language F may be erroneous in ways that were never previously understood to be erroneous.

The key brand new insight that I bring to the Halting Problem and other undecidable self-referential expressions of formal language is this:

(∀x ∈ Language(F) ( Satisfiable(x) ∨ Falsifiable(x) ∨ ~Sentence(x) ))

If any expression of language does not evaluate to {True, False} then this expression of language is not a sentence of that formal language even if it is a closed WFF of the language.

**Then we apply this key new insight to the Halting Problem where: **

A halt decider H is construed as deriving a formal proof in the language of Turing machine descriptions. The inference steps of this formal proof would be construed as a hypothetical execution trace of the input Turing machine description on its input finite string.

**Defining a Halting Decidability Decider**

No finite sequence of steps satisfies Halts(Ĥ, Ĥ) or satisfies Loops(Ĥ, Ĥ), where the term {satisfies} and the predicates Halts() and Loops() are mathematical. The predicates Halts() and Loops() correspond to predicate logic True and False; thus the unsatisfiability of Halts() and Loops() indicates that Ĥ does not specify a logic sentence.

**Copyright 2018 Pete Olcott**

Mendelson, Elliott 2015.

Introduction to Mathematical logic Sixth edition.

1.4 An Axiom System for the Propositional Calculus: 28 CRC Press


∀F ((F ∈ Formal_Systems & Q ⊆ F & Consistent(F)) → ∃G ∈ L(F) (G ↔ ~(F ⊢ G)))
∃G ∈ L(F) (G ~(F ⊢ G)))

**English meaning of the above logic sentence:**

For all consistent formal systems that are a superset of Robinson Arithmetic there is a sentence of the language of this formal system that is materially equivalent to a statement of its own unprovability.

**Its rebuttal would be formalized as this:**
∀F ∈ Formal_Systems (~∃G ∈ Language(F) (G ↔ ~(F ⊢ G)))

**English meaning of the above logic sentence:**

For all formal systems there is no sentence of the language of this formal system that is materially equivalent to a statement of its own unprovability.

If G was Provable in F this contradicts its assertion: G is not Provable in F.

If ~G was Provable in F this contradicts its assertion: G is Provable in F.

Since G is neither Provable nor Refutable in F it forms a Gödel sentence in F.

∴ There does not exist any sentence of any formal language that is materially equivalent to a statement of its own unprovability.

Kurt Gödel, Allan Turing and Alfred Tarski misinterpreted these erroneous expressions of language proving undecidability as fundamental limitations of Math, Computer Science and Logic.

http://mathworld.wolfram.com/Undecidable.html

Not decidable as a result of being neither formally provable nor unprovable.

Copyright 2018 Pete Olcott

“For every Formal_System that is a superset of Robinson_Arithmetic there exists a proposition that is materially equivalent to a statement of its own unprovability.”

**∀F ∈ Formal_Systems (F ⊇ Robinson_Arithmetic → ∃G ∈ F (G ↔ ~(F ⊢ G)))**

If every self-referential Turing machine description Halting Problem counter-example could be recognized and rejected such that halting decidability would be decidable (in these cases) would this refute Rice’s Theorem?

**Halting decidability decider could possibly make halting always decidable:**

A Turing machine halt decider H attempts to build a formal mathematical proof from its inputs to its own final states H.halts and H.loops. If it finds that no such proof exists, then it has decided halting undecidability and rejects its inputs as erroneous; otherwise one of the two proofs that it built has decided halting on these inputs.

An Introduction to Formal Languages and Automata (1990) by Peter Linz Pages 318-319 show the definition of the two Turing machines that are being analyzed: The top of page 320 shows that Ĥ is being used self-referentially.

Halting Problem Proof from Finite Strings to Final States (2018) by Pete Olcott. This paper begins its first two pages with a slight adaptation of the notational conventions of Linz. It labels the second q0 state in Ĥ as Ĥ.qx. It includes the non-final state Ĥ.qy. It numbers each copy of the input Turing machine descriptions. These adaptations to the Linz notation make it possible to perform a hypothetical execution trace of H(Ĥ, Ĥ).

This execution trace shows that no formal proof exists from H(Ĥ, Ĥ) to final states H.halts or H.loops. No proof exists because the input pair (Ĥ, Ĥ) specifies infinite recursion long before it reaches its paradoxical states of (qy) or ((qn)).

**The Refutation of Peter Linz Halting Problem Proof is complete. **

This refutation applies to the other conventional (self-referential) Halting Problem proofs. This algorithm was completed December 13th, 2018, 7:00 PM.

**Peter Linz Halting Problem Proof (1993) with Turing Machines H and Ĥ.**

http://liarparadox.org/Peter_Linz_HP(Pages_318-319).pdf

Every detail of the encoding of virtual machines implementing the Peter Linz H deciding halting for the Peter Linz input pair (Ĥ, Ĥ) is complete. The only step remaining is the C++ encoding of the UTM that executes these virtual machine descriptions.

When this last step is complete I will provide the full execution trace of H actually deciding halting for input pair (Ĥ, Ĥ).

The only reason that this is possible is a key undiscovered detail that no one noticed for more than eight decades.

Copyright 2018 Pete Olcott

Tarski Undefinability Proof (pages 247-248)

**Tarski’s Undefinability Theorem proof: **

Tarski Undefinability Proof (pages 275-276)

Direct quotes from pages 275-276 **// indicates paraphrase**

the symbol ‘Pr’ which denotes the class of all provable sentences of the theory under consideration

…

We can construct a sentence x … which satisfies the following condition:

It is not true that x ∈ Pr if and only if p

…

(1) x ∉ Pr ↔ p **// p ↔ ~Provable(x)**

Where the symbol ‘p’ represents the whole sentence x.

…

We shall now show that the sentence x is actually undecidable and at the same time true.

…

If we denote the class of all true sentences by the symbol ‘Tr’ then – in accordance with convention T – the sentence x which we have constructed will satisfy the following condition:

(2) x ∈ Tr ↔ p **// p ↔ True(x)**

From (1) and (2) we obtain immediately:

(3) x ∉ Pr ↔ x ∈ Tr **// ~Provable(x) ↔ True(x)**

We can derive the following theorems from the definition of truth

(4) either x ∉ Tr or ~x ∉ Tr **// ~True(x) ∨ ~True(~x)**

(5) if x ∈ Pr, then x ∈ Tr **// Provable(x) → True(x)**

(6) if ~x ∈ Pr, then ~x ∈ Tr **// Provable(~x) → True(~x)**

From (3) and (5) we infer without difficulty that

(7) x ∈ Tr **// True(x)**

and that

(8) x ∉ Pr **// ~Provable(x)**

In view of (4) and (7) we have

(8a) ~x ∉ Tr, **// ~True(~x)**

which together with (6) gives us the formula:

(9) ~x ∉ Pr **// ~Provable(~x)**

The formulas (8) and (9) together express the fact that x is an undecidable sentence; moreover from (7) it follows that x is a true sentence.

Copyright 1936 Alfred Tarski

Paraphrase Simplifications Copyright 2018 Pete Olcott

The Concept of Truth in Formalized Languages, Alfred Tarski 1936

“there exists a proposition that is materially equivalent to a statement of its own unprovability.”

If G was Provable this contradicts its assertion: G is not Provable.

If ~G was Provable this contradicts its assertion: G is Provable.

Therefore G is neither Provable nor Refutable and does not exist.

2019-07-16 update

If G were False then it would be ~Provable: (F ↔ T) is false.

If G were True then it would be ~~Provable: (T ↔ F) is false.

∴ ~∃G (G ↔ ~Provable(G))
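The two-case argument above can be checked mechanically. Below is a minimal sketch in Python, under this section's premise that truth and provability coincide (the function name `satisfiable_cases` is my invention):

```python
# Premise of this section: within arithmetic, True(G) and Provable(G)
# are one and the same. Check both truth values G could take against
# the biconditional G <-> ~Provable(G).
def satisfiable_cases():
    cases = []
    for g in (False, True):
        provable = g                    # truth = provability (the premise)
        if g == (not provable):         # does G <-> ~Provable(G) hold?
            cases.append(g)
    return cases

print(satisfiable_cases())  # -> [] : no truth value satisfies it
```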

Copyright 2018 Pete Olcott

[a priori] Truth

For any natural (human) or formal (mathematical) language L we know that an expression X of language L is true if and only if there are expressions Γ of language L that connect X to known facts.

The *only way* that you know that an expression of language is true is that it is connected to known facts in this way.

**BaseFact** is an expression X of (natural or formal) language L that has been assigned the semantic property of True by making it a member of the collection named: BaseFacts. (Similar to a math Axiom).

(1) BaseFacts that contradict other BaseFacts are prohibited.

(2) BaseFacts must specify Relations between Things.

The above is the complete specification for a BaseFact.

To verify that an expression X of language L is True or False only requires a syntactic logical consequence inference chain (formal proof) from one or more BaseFacts to X or ~X. (Backward chaining reverses this order).

True(L, X) ≡ ∃Γ ⊆ BaseFacts(L) (Γ ⊢ X)

False(L, X) ≡ ∃Γ ⊆ BaseFacts(L) (Γ ⊢ ~X)

**Sentence (mathematical logic)**

In mathematical logic, a sentence of a predicate logic is a boolean-valued well-formed formula with no free variables. A sentence can be viewed as expressing a proposition, **something that must be true or false**. **The restriction of having no free variables is needed to make sure that sentences can have concrete, fixed truth values:** As the free variables of a (general) formula can range over several values, the truth value of such a formula may vary.

**Defining a Generic Decidability Decider:**

∀L ∈ Formal_Systems

∀X ∈ Closed-WFF(L)

**~True(L, X) ∧ ~False(L, X) → Incorrect(L, X)**
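A minimal sketch of this decider in Python, with invented example facts and rules (the names `closure` and `classify` and the Horn-rule encoding are my assumptions, not part of the formal definitions above):

```python
# BaseFacts are plain strings; rules are Horn-style pairs:
# (frozenset of premise strings, conclusion string). "~" marks negation.
def closure(base_facts, rules):
    """Derive everything reachable from the BaseFacts (Γ ⊢ X)."""
    derived = set(base_facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def classify(x, base_facts, rules):
    """True / False / Incorrect, per the Generic Decidability Decider."""
    derived = closure(base_facts, rules)
    if x in derived:
        return "True"          # ∃Γ ⊆ BaseFacts (Γ ⊢ X)
    if "~" + x in derived:
        return "False"         # ∃Γ ⊆ BaseFacts (Γ ⊢ ~X)
    return "Incorrect"         # neither provable nor refutable

facts = {"2+3=5"}
rules = [(frozenset({"2+3=5"}), "~2+3=7")]
print(classify("2+3=5", facts, rules))   # -> True
print(classify("2+3=7", facts, rules))   # -> False
print(classify("LP", facts, rules))      # -> Incorrect
```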

A language L is a set of finite strings of characters from a

defined alphabet specifying relations to other finite strings.

These finite strings could be tokenized as single integer values.

A Relation is identical to the common notion of a Predicate

from Predicate Logic, essentially a Boolean valued function.

Finite string Expression X expresses relation R of language L.

True(L, X) ≡ ∃Γ ⊆ BaseFacts(L) (Γ ⊢ X)

01 There_Exists (2)(6)

02 Subset_Of (3)(4)

03 Γ

04 BaseFacts (5)

05 L

06 Provable (3)(7)

07 X

Copyright 2018 (and many other years since 1997) Pete Olcott

This complexity comes from the fact that the GIT must construct the equivalent of a HOL provability predicate within the language of arithmetic.

Doing this apparently requires 46 definitions, Gödel numbers and the Diagonal lemma. If we were dealing with HOL that has its own provability predicate none of this extraneous complexity would be required.

**Stanford Encyclopedia of Philosophy Gödel’s Incompleteness Theorems **

**2.5 The First Incompleteness Theorem—Proof Completed**

**(G) F ⊢ G_{F} ↔ ¬Prov_{F}(⌈G_{F}⌉)**

**My adaptation:**

**G ↔ ~∃Γ ⊆ WFF(F) Provable(Γ, G)** // No Gödel numbers required

A **BaseFact** is an expression X of (natural or formal) language L that has been assigned the semantic property of True. (Similar to a math Axiom).

A **Collection** T of **BaseFacts** of language L forms the ultimate foundation of the notion of Truth in language L.

To verify that an expression X of language L is True or False only requires a syntactic logical consequence inference chain (formal proof) from one or more elements of T to X or ~X.

True(L, X) ≡ ∃Γ ⊆ BaseFact(L) Provable(Γ, X)

False(L, X) ≡ ∃Γ ⊆ BaseFact(L) Provable(Γ, ~X)

**Since an expression X of language L is not a statement of language L unless and until it is proven to be True or False in L, every statement of language L can be proved or disproved in L.**

**∀L ∈ Formal_Systems
∀X ∈ L
Statement(L, X) → ( Provable(L, X) ∨ Refutable(L, X) )**

**Stanford Encyclopedia of Philosophy Gödel’s Incompleteness Theorems**

The first incompleteness theorem states that in any consistent formal system F within which a certain amount of arithmetic can be carried out, **there are statements of the language of F which can neither be proved nor disproved in F.**

Formalized as: **Statement(F, G) ∧ ~Provable(F, G) ∧ ~Refutable(F, G)**

“This sentence is not true”. // is not true

// HOL with self-reference semantics

**LP ≡ ~∃Γ ⊆ BaseFact(L) Provable(Γ, LP)**

LP is rejected as semantically incorrect on the basis that no formal proof exists from one or more elements of T to LP or ~LP.

Because LP is neither True nor False LP is semantically incorrect.

“This sentence is not provable”. // is not provable

// HOL with self-reference semantics

**G ≡ ~∃Γ Provable(Γ, G)**

G is rejected as semantically incorrect on the basis that no formal proof exists from one or more elements of T to G or ~G.

**Copyright 2017, 2018 Pete Olcott **

A BaseFact is an expression of language defined to be True.

To prove that an expression of language is an Analytical Fact we confirm that there is a correct line-of-reasoning from one or more BaseFacts that make this expression necessarily True.

Since every Statement of language must be True or False and this is verified by proving that the statement is a Analytical Fact or contradicts an Analytical Fact all Statements of language can always be proved or disproved.

** Analytical means: a priori

Copyright Pete Olcott 2018


To prove that an expression of language is an Analytical Fact we confirm that there is a correct line-of-reasoning from other Analytical Facts that make this expression necessarily True.

False simply means contradicting an Analytic Fact. A Statement of language must be True or False or it is not a correct Statement.

Since every Analytic Statement of language must be True or False, and this is verified by proving that the statement is an Analytic Fact or contradicts an Analytic Fact, all Statements of language can always be proved or disproved.

**Here is the plain English version of the proof that I am refuting:**

“The first incompleteness theorem states that in any consistent formal system F within which a certain amount of arithmetic can be carried out, **there are statements of the language of F which can neither be proved nor disproved in F**.”

**Stanford Encyclopedia of Philosophy Gödel’s Incompleteness Theorems.**

All material besides the SEP quote is Copyright 2017, 2018 Pete Olcott

Analytic Facts (English)

An analytic Fact is an expression of language that is completely verified as True entirely based on the meaning of its words (or symbols).

Some expressions of language are defined to be necessarily True. This forms the foundation of all Truth. To keep things simple we will call these expressions VERIFIED_FACTS.

To provide mathematical rigor to the definition of VERIFIED_FACTS we apply the David Hilbert formalist approach and specify a set of finite strings that are defined to have the semantic property of: True.

**True(L, Y, X)** means that there is an inference chain from a set Y of one or more VERIFIED_FACTS of L that derive X.

**False(L, Y, X)** means that there is an inference chain from a set Y of one or more VERIFIED_FACTS of L that derive ~X.

**Provable(L, Y, X)** means that there is an inference chain from a set Y of one or more expressions of L that derive X.

**Refutable(L, Y, X)** means that there is an inference chain from a set Y of one or more expressions of L that derive ~X.

**Statement(L, Y, X) ≡ ( True(L, Y, X) ∨ False(L, Y, X) ) ∴
~◇( Statement(L, Y, X) ∧ ~Provable(L, Y, X) ∧ ~Refutable(L, Y, X) )**

This is a refinement to the 1997 Mathematical Mapping Theory of Truth

Copyright 1997 2004, 2015, 2016, 2017, 2018 Pete Olcott

“The first incompleteness theorem states that in any consistent

formal system F within which a certain amount of arithmetic can

be carried out, there are statements of the language of F which can

neither be proved nor disproved in F.”

**Formalized as: Sentence_{F}(G) ∧ ~Provable_{F}(G) ∧ ~Refutable_{F}(G)**

**Any expression G of language F is not a sentence of language F if this expression G of language F is not provable or refutable in F.**

**(see definitions below)**

∀FS ∈ Formal_Systems:

… Provable(FS, X) ≡ ∃Γ ⊆ Rules(FS) (Γ ⊢ X)

… Refutable(FS, X) ≡ ∃Γ ⊆ Rules(FS) (Γ ⊢ ~X)

… True(FS, X) ≡ ∃Γ ⊆ Axioms(FS) (Γ ⊢ X)

… False(FS, X) ≡ ∃Γ ⊆ Axioms(FS) (Γ ⊢ ~X)

… Sentence(FS, X) ≡ ( True(FS, X) ∨ False(FS, X) )

Axioms ⊆ Rules ⊆ WFF ∴

True(FS, X) ⊢ Provable(FS, X)

False(FS, X) ⊢ Refutable(FS, X)

**∴ ( ~Provable(F, G) ∧ ~Refutable(F, G) ) → ~Sentence(F, G)**

**copyright 2017, 2018 Pete Olcott**

∃a ∈ Turing_Machine_Descriptions

∀b ∈ Finite_Strings // some of these specify Turing Machine Descriptions

∀c ∈ Finite_Strings

Halts(a, b, c) ∨ ~Halts(a, b, c)

**Halt Decider definition**

A Halt Decider (HD) determines whether or not a finite string represents a correct Turing Machine Description that would halt on its input. It does this by performing a mathematical proof on a pair of finite strings as a David Hilbert formalism in the language of Turing Machine Descriptions.

**The syntactic logical consequence of this proof is two finite strings:**

(1) “Y” input pair (b,c) is a correct TMD that would halt on its input.

(2) “N” indicating the else branch of (1).

If the first element of the input finite string pair is a correct TMD then this proof must necessarily proceed from the specified TMD start state through all of the state transitions of this TMD to the halting or non halting behavior of this TMD.

The proof would essentially be a hypothetical execution trace** of the state transition sequences of the input TMD. It cannot be an actual execution trace or the halt decider could have non-halting behavior.

** Like step-by-step mode in a debugger.

http://liarparadox.org/HP_Infinite_Recursion.pdf

The following has been adapted from material from the above link:

We begin our analysis by constructing a hypothetical halt decider: H.

**Figure 12.1 Turing Machine H**

The dashed lines proceeding from state (q0) are represented in the text definition as the asterisk ⊢* wildcard character. These conventions are used to encode unspecified state transition sequences.

**Definition of Turing Machine H** (state transition sequence)

**H.q0 Wm W ⊢* H.qy** // Wm W corresponds to (b,c) shown above

**H.q0 Wm W ⊢* H.qn** // Wm W corresponds to (b,c) shown above

The diagram and the state transition sequence indicate that H begins at its own start state H.q0 and is applied to finite string pair (Wm, W).

H.qy is the final state of H indicating that Wm is a correct TM that would halt on input W. **Syntactic logical consequence : “Y” (shown above).**

H.qn is the final state of H indicating that Wm is not a correct TM that would halt on input W. **Syntactic logical consequence: “N” (shown above).**

**We create Turing Machine Ĥx by making the following changes to H:**

(1) Ĥx copies its input Wm at its Ĥx.q0 state and transitions to its Ĥx.qx state.

(2) Ĥx would begin to evaluate Wm Wm at its Ĥx.qx state in exactly the same way that H would begin to evaluate its Wm W input at its H.q0 state.

Since Turing Machine Ĥx is created by adapting H, it would have exactly the same behavior at its Ĥx.qx state as H would have at its H.q0 state.

**Figure 12.4 Turing Machine Ĥx**

**Definition of Turing Machine Ĥx **(state transition sequence)

**Ĥx.****q0 Wm ⊢* Ĥx.qx Wm Wm ⊢* Ĥx.qy **// NO LOOP

**Ĥx.****q0 Wm ⊢* Ĥx.qx Wm Wm ⊢* Ĥx.qn ** // NO LOOP HERE EITHER

**If Turing Machine H is applied to Turing Machine description [Ĥx] [Ĥx], would it transition to H.qy or H.qn?**

**We append a “2” to the input to [Ĥx] for clarity: [Ĥx2] is a copy of [Ĥx].**

**To answer this question we perform an execution trace on H**

(1) *H begins at its start state H.q0.*

(2) *H begins to evaluate what [Ĥx] would do on its input [Ĥx2]*.

**Hypothetical execution trace of what [Ĥx] would do if it were executed**

(3) *[Ĥx] would begin at its start state [Ĥx].q0*

(4) *[Ĥx] would make a copy of its input [Ĥx2]*, we will call this [Ĥx3].

(5) *[Ĥx] would transition to its state [Ĥx].qx.*

(6) *[Ĥx] would begin to evaluate what [Ĥx2] would do on its input [Ĥx3].*

**Can you see the infinite recursion?**

H [Ĥx] [Ĥx] specifies an infinitely recursive evaluation sequence.

Every HP proof by contradiction depends on this same infinite recursion.

Because every TMD in the infinitely recursive sequence is defined in terms of H, each would reject the whole sequence as semantically incorrect **before even beginning any halting evaluation.**

Therefore H would correctly transition to its H.qn state rejecting the infinitely recursive evaluation sequence specified by [Ĥx] [Ĥx] as the semantic error of Pathological self-reference(Olcott 2004).
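The regress described above can be sketched as a toy Python model. This is an illustration only: `H` and `Hx` are ordinary functions standing in for the machines (their names and the depth cut-off are my inventions), not real Turing machines:

```python
# Toy model only: LIMIT is an artificial cut-off so the demo terminates;
# a real evaluator would face unbounded re-entry at this point.
LIMIT = 50

def H(machine, input_string, depth=0):
    """Stand-in for H's hypothetical evaluation of machine on its input."""
    if depth > LIMIT:
        return "pathological self-reference"   # the rejection argued above
    return machine(input_string, depth)

def Hx(input_string, depth):
    copy = input_string                 # (1) Hx first copies its input
    return H(Hx, copy, depth + 1)       # (2) then evaluates the copy as H would

# H applied to [Hx] [Hx] re-enters the same evaluation without end:
print(H(Hx, Hx))  # -> pathological self-reference
```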

**Figure 12.3 Turing Machine Ĥ**

**Definition of Turing Machine Ĥ **(state transition sequence)

**Ĥ.q0 Wm ⊢* Ĥ.qx Wm Wm ⊢* Ĥ.qy ∞**

**Ĥ.q0 Wm ⊢* Ĥ.qx Wm Wm ⊢* Ĥ.qn**

Copyright 2004, 2015, 2016, 2017, 2018 Pete Olcott

An axiom is a proposition regarded as self-evidently true without proof.

Wolfram Mathworld: Theorem

A theorem is a statement that can be demonstrated to be true by accepted mathematical operations and arguments.

I define an **{a priori fact}** as an expression X of language L that has been assigned the semantic property of True.

Copyright 2018 Pete Olcott

These conventional terms of the art are defined in terms of other conventional terms of the art and these terms also have fundamental misconceptions built in to their definitions, on and on.

My system analyzes specific instances of the generic notion of finite strings of characters to determine whether these finite strings correspond to the generic notion of {**Truth**}.

The generic notion of {**formal provability**} simply examines whether or not a specific preexisting set of finite string transformation rules allow one finite string to be derived from another. In the case where the first finite string has the Boolean property of True, then the derived finite string is also known to have this same Boolean property.

The set of finite string transformation rules includes a set of finite strings that have been defined to have the Boolean property of True. This corresponds to the notion of Prolog Facts. From the Fact that “a can is made of tin” we can derive the fact that “a can is made of metal”.
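The tin/metal derivation above can be sketched as a single rule application (the triple encoding, the pattern variable "X", and the `derive` function are my invented representation, not part of the system described):

```python
# One fact and one rule; "X" in the rule is a pattern variable that
# binds to the subject of any matching fact.
facts = {("can", "made_of", "tin")}
rules = [(("X", "made_of", "tin"), ("X", "made_of", "metal"))]

def derive(facts, rules):
    """Apply each rule once wherever its premise matches a fact."""
    new = set(facts)
    for (_, p_rel, p_obj), (_, c_rel, c_obj) in rules:
        for (s, r, o) in facts:
            if r == p_rel and o == p_obj:      # premise matches with X := s
                new.add((s, c_rel, c_obj))
    return new

print(("can", "made_of", "metal") in derive(facts, rules))  # -> True
```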

The above systematic reconstruction of the notions of {**Truth**} and {**formal provability**} is applied to the following specific concrete examples. The essence of these examples is reformalized and analyzed for logical coherence within a new formal system.

The Liar Paradox from antiquity

David Hilbert’s 1928 Entscheidungsproblem

Kurt Gödel’s 1931 Incompleteness Theorem

Alfred Tarski’s 1936 Undefinability Theorem

Alan Turing’s 1937 Halting Problem

“The first incompleteness theorem states that in any consistent

formal system F within which a certain amount of arithmetic can

be carried out, there are statements of the language of F which can

neither be proved nor disproved in F.”

**Formalized as: Sentence_{F}(G) ∧ ~Provable_{F}(G) ∧ ~Refutable_{F}(G)**

**Any expression G of language F is not a sentence of language F if this expression G of language F is not provable or refutable in F.**

**(see definitions below)**

∀FS ∈ Formal_Systems:

… Provable(FS, X) ≡ ∃Γ ⊆ Rules(FS) (Γ ⊢ X)

… Refutable(FS, X) ≡ ∃Γ ⊆ Rules(FS) (Γ ⊢ ~X)

… True(FS, X) ≡ ∃Γ ⊆ Axioms(FS) (Γ ⊢ X)

… False(FS, X) ≡ ∃Γ ⊆ Axioms(FS) (Γ ⊢ ~X)

… Sentence(FS, X) ≡ ( True(FS, X) ∨ False(FS, X) )

Axioms ⊆ Rules ⊆ WFF ∴

True(FS, X) ⊢ Provable(FS, X)

False(FS, X) ⊢ Refutable(FS, X)

**∴ ( ~Provable(F, G) ∧ ~Refutable(F, G) ) → ~Sentence(F, G)**

**copyright 2017, 2018 Pete Olcott**

(1) X

(2) Y

(3) (X ∧ Y) → Z

———–

(4) Z

**Here is what the Propositional Logic proof means:**

(1) X is True

(2) Y is True

(3) If (X and Y) are True then Z is True

(4) Therefore Z is True
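The four-line proof above can be verified by brute force over the truth table (a quick Python check; the name `entails_z` is mine):

```python
from itertools import product

# Check: every truth-table row satisfying premises (1)-(3) also
# satisfies the conclusion Z, so the proof is valid.
def entails_z():
    for x, y, z in product((False, True), repeat=3):
        implication = (not (x and y)) or z     # (3)  (X ∧ Y) → Z
        premises = x and y and implication     # (1), (2), (3) all hold
        if premises and not z:
            return False                       # a counterexample row
    return True

print(entails_z())  # -> True
```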


Aristotle Syllogism (slightly enhanced):

…..Premise (a) “It is raining outside.”

…..Premise (b) “You go outside unprotected from the rain.”

Conclusion (c) “You get wet.”

Partial translation to Propositional Logic:

if (a) and (b) then (c)

Full translation to Propositional Logic:

(a ∧ b) → c

Propositional Logic Symbols and their English meanings

P …………. P is true

~P ……….. P is not true

P ∧ Q ……. P and Q are both true

P ∨ Q ……. either P or Q (or both) are true

P → Q ….. if P is true then Q is true

…………….. if Q is false then P is false

…………….. if P is false then (Q ∨ ~Q)

If you understand the above then you know your P’s and Q’s

Syntactic versus Semantic Logical Consequence

Meaning postulates specify semantic logical entailment syntactically.

The best two examples from his paper: Bachelor(x) and Warmer(x,y):

Bachelor(x) → ~Married(x)

For example, let ‘W’ be a primitive predicate designating the relation Warmer. Then ‘W’ is transitive, irreflexive, and hence asymmetric in virtue of its meaning.

In the previous example of the predicate ‘W’, we could lay down the following postulates (a) for transitivity and (b) for irreflexivity; then the statement (c) of asymmetry:

(a) ∀(x,y,z) Warmer(x,y) ∧ Warmer(y,z) → Warmer(x,z)

(b) ∀(x) ~Warmer(x,x)

(c) ∀(x,y) Warmer(x,y) → ~( Warmer(y,x) )

Meaning Postulates Rudolf Carnap (1952)
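Carnap's postulates can be spot-checked on a small model (the domain and temperatures below are my invented example; Warmer is modeled as strict numeric comparison):

```python
from itertools import product

# Warmer(x, y) modeled as strict comparison of invented temperatures.
temp = {"sun": 5778, "oven": 500, "ice": 273}
def warmer(x, y):
    return temp[x] > temp[y]

D = list(temp)
# (a) transitivity, (b) irreflexivity, (c) the derived asymmetry statement.
a = all(not (warmer(x, y) and warmer(y, z)) or warmer(x, z)
        for x, y, z in product(D, repeat=3))
b = all(not warmer(x, x) for x in D)
c = all(not (warmer(x, y) and warmer(y, x))
        for x, y in product(D, repeat=2))
print(a, b, c)  # -> True True True
```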

Copyright 2017 Pete Olcott


Syntactic Consequence (Provability)

A formula **A** is a syntactic consequence within some formal system **FS** of a set Γ of formulas if there is a formal proof in **FS** of **A** from the set Γ.

The first incompleteness theorem states that in any consistent formal system F within which a certain amount of arithmetic can be carried out, there are statements of the language of F which can neither be proved nor disproved in F.

https://en.wikipedia.org/wiki/Sentence_(mathematical_logic)

In mathematical logic, **a sentence of a predicate logic**** is a boolean-valued well-formed formula with no free variables**. A sentence can be viewed as expressing a proposition, **something that must be true or false**.

**G @ ∃X ~Provable(X, G)** // Minimal Type Theory

(1) The above * expression* says that it is not

(2) The above

(3) However the above

**Expression translated into a Directed Acyclic Graph by the MTT compiler**

[01] G (02)(04)

[02] THERE_EXISTS (03)

[03] X

[04] NOT (05)

[05] Provable (03)(01) // **cycle indicates infinite evaluation loop**
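The cycle reported above can be confirmed with ordinary depth-first search (a sketch; the node names encode the numbered list above, and the adjacency list is my transcription of it):

```python
# The numbered DAG above as an adjacency list; the edge from node 05
# back to node 01 is the cycle the MTT compiler flags.
graph = {
    "01_G":            ["02_THERE_EXISTS", "04_NOT"],
    "02_THERE_EXISTS": ["03_X"],
    "03_X":            [],
    "04_NOT":          ["05_Provable"],
    "05_Provable":     ["03_X", "01_G"],   # (03)(01): refers back to G itself
}

def has_cycle(graph):
    """Standard three-color depth-first search for a back edge."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}
    def dfs(v):
        color[v] = GRAY
        for w in graph[v]:
            if color[w] == GRAY or (color[w] == WHITE and dfs(w)):
                return True
        color[v] = BLACK
        return False
    return any(color[v] == WHITE and dfs(v) for v in graph)

print(has_cycle(graph))  # -> True
```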

**Cycles in directed graphs of logical expressions are erroneous:**

https://en.wikipedia.org/wiki/Occurs_check

A naive omission of the occurs check leads to the creation of cyclic structures and may cause unification to loop forever.

**Programming in Prolog (2003) by Clocksin and Mellish page 254**

equal(X, X).

?- equal(foo(Y), Y).

… **match a term against an uninstantiated subterm of itself**. In this example, foo(Y) is matched against Y, which appears within it. As a result, Y will stand for foo(Y), which is foo(foo(Y)) (because of what Y stands for), which is foo(foo(foo(Y))), and so on. So Y ends up standing for some kind of infinite structure.
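A minimal sketch of the occurs check itself, in Python rather than Prolog (encoding terms as nested tuples is my assumption, and `unify_var` stands in for one step of unification):

```python
# Terms encoded as nested tuples: ("foo", "Y") is foo(Y); bare strings
# are variables.
def occurs(var, term):
    """The occurs check: does var appear anywhere inside term?"""
    if term == var:
        return True
    if isinstance(term, tuple):            # compound term: (functor, *args)
        return any(occurs(var, arg) for arg in term[1:])
    return False

def unify_var(var, term):
    """Bind var to term, unless that would build an infinite structure."""
    if occurs(var, term):
        return None                        # reject: Y occurs inside foo(Y)
    return {var: term}

print(unify_var("Y", ("foo", "Y")))   # -> None (equal(foo(Y), Y) rejected)
print(unify_var("Y", ("foo", "a")))   # -> {'Y': ('foo', 'a')}
```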

https://plato.stanford.edu/entries/goedel-incompleteness/#FirIncTheCom

**(G) F ⊢ G_{F} ↔ ¬Prov_{F}(⌈G_{F}⌉)**

**Match a term against an uninstantiated subterm of itself: **

Equivalent( GF, ¬ProvF( Godel_Number(GF) ) )

Equivalent( GF, ¬ProvF(GF) )

Equivalent( ¬ProvF(GF), GF )

Equivalent( ProvF(GF), GF )

?- equal(foo(Y), Y).

**Can you see how the above two are the same?**

In order to mathematically formalize every aspect of the Liar Paradox as predicate logic, I had to enhance the way that predicate logic works.

**Defining Self-Reference Semantics <assign alias name> operator**

LHS is assigned as an alias name for the RHS

**LHS ≡ RHS**

The LHS is logically equivalent to the RHS *only because* the LHS is merely an alias name for the RHS.

**LiarParadox ≡ ~True(LiarParadox). **

The name and alias name operator are merely notational conventions for translating the meaning: **{This sentence is not true}** into predicate logic.

**Translation into directed acyclic graph**

**01 LiarParadox (02)
02 Not (03)
03 True (01) **// infinite evaluation loop

**Here is how {alias name} works in computer science:**

In computer science, variable names refer to memory addresses. Variable names are considered one and the same thing as the memory address. When a computer program is compiled the names are translated into these memory addresses and discarded.

**// The Liar Paradox in C++ **

Test the value of a variable before this variable has a value

then assign this non-existent value to this variable.

**bool Liar_Paradox = (Liar_Paradox == false);**

This statement is semantically analogous to the one above in that the mathematical quotient of division by zero does not exist. (Note that under IEEE 754 floating point the expression below actually evaluates to +infinity rather than failing.)

**double BadNumber = 1.0 / 0.0;**

Copyright 2016, 2017 2018 Pete Olcott

2.2 The fixed point theorem

GL ⊢ B ↔ A(B)

GL ⊢ LiarParadox ↔ ~True(LiarParadox)

Do you see the similarity? Prolog rejects both of these as erroneous.

http://liarparadox.org/Prolog_Detects_Pathological_Self_Reference.pdf

**Here is the error that Prolog detects: (see above link)**

**match a term against an uninstantiated subterm of itself.**

The exact same thing as this C++

bool LiarParadox = LiarParadox == false;

Copyright 2017 Pete Olcott

Copyright 2016, 2017 Pete Olcott

A formula **A** is a syntactic consequence within some formal system **FS** of a set Γ of formulas if there is a formal proof in **FS** of **A** from the set Γ.

https://en.wikipedia.org/wiki/Sentence_(mathematical_logic)

In mathematical logic, **a sentence of a predicate logic** is a boolean-valued well-formed formula with no free variables. A sentence can be viewed as expressing a proposition, something that **must be true or false.**

https://plato.stanford.edu/entries/goedel-incompleteness/#FirIncTheCom

The first incompleteness theorem states that in any consistent formal system F within which a certain amount of arithmetic can be carried out,

**there are statements of the language of F which can neither be proved nor disproved in F:** **Sentence _{F}(G) ∧ ~Provable_{F}(G) ∧ ~Refutable_{F}(G)**

**G @ ~∃Γ ⊆ F Provable(Γ, G)** // Minimal Type Theory

(1) The above * expression* says that it is not

(2) The above

(3) However the above

**Sentence translated into a Directed Acyclic Graph by the MTT compiler:**

[01] G (02)(07)

[02] NOT (03)

[03] THERE_EXISTS (04)

[04] SUBSET_OF (05)(06)

[05] Γ

[06] F

[07] Provable (05)(01) **// cycle indicates infinite evaluation loop**

The YELLOW highlighted portions form the foundational basis for what I mean by Provable(X).

The paragraph following the YELLOW highlighted paragraph directly states what I mean by Provable.

The remaining material establishes common notational conventions that are essentially the same as I have been using.

https://en.wikipedia.org/wiki/Logical_consequence#Syntactic_consequence

**Direct quote from above Wikipedia link:**

A formula **A** is a syntactic consequence within some formal system **FS** of a set Γ of formulas if there is a formal proof in **FS** of **A** from the set Γ.

**In other words:** Formula A is Provable within Formal System FS if there exists an inference chain (connected sequence of WFF) from a set of Formula Γ to Formula A.

Furthermore if Γ are axioms of FS, then Provable(Γ, A) ≡ True(A) in FS:

**True_{FS}(A) ≡ Provable_{FS}(Γ, A)**

Last paragraph copyright 2017 Pete Olcott

The first incompleteness theorem states that in any consistent formal system F within which a certain amount of arithmetic can be carried out, there are statements of the language of F which can neither be proved nor disproved in F.

**When Godel said that he found a sentence G of language F such that:
~(Provable_{F}(G) ∨ Refutable_{F}(G) ) he was wrong.**

https://plato.stanford.edu/entries/goedel-incompleteness/#FirIncTheCom

**(G) F ⊢ G_{F} ↔ ¬Prov_{F}(⌈G_{F}⌉)**

1931 GIT entirely summed up as a single line of Minimal Type Theory:

**G @ ∃X ~Provable(X, G)**

(1) The above * expression* says that it is not

(2) The above

(3) However the above

Sentence translated into a Directed Acyclic Graph by the MTT compiler:

[01] G (02)(04)

[02] THERE_EXISTS (03)

[03] X

[04] NOT (05)

[05] Provable (03)(01) // **cycle indicates infinite evaluation loop**

If G is an alias (sentential variable) for ∃X (X ⊢ Y), then G is either True or False, even if both X and Y are gibberish.

If G is an alias (sentential variable) for ~∃X (X ⊢ G) we have pathological self-reference creating infinite recursion. This prevents G from ever evaluating to either True or False, thus making G semantically ill-formed.

copyright 2017 Pete Olcott (Created Friday October 13, 2017)

**Originally posted material is shown below**

If the 1931 Incompleteness Theorem is correct then the much simpler expression at the end of this post equally proves that there are some sentences of predicate logic that are neither provable nor refutable.

Syntactic_Logical_Consequence (Γ proves A)

A formula A is a syntactic consequence within some formal system FS of a set Γ of formulas if there is a formal proof in FS of A from the set Γ:

**Γ ⊢_{FS} A**

Translation to Minimal Type Theory notational conventions

**Γ ⊢_{FS} A ≡ ( Γ ⊂ FS (Γ ⊢ A) )**

Named predicate (of predicate logic PL) impossible to prove and impossible to refute:** G( ~∃Γ ⊂ PL (Γ ⊢ G) )**

The first incompleteness theorem states that in any consistent formal system F within which a certain amount of arithmetic can be carried out, there are statements of the language of F which can neither be proved nor disproved in F.

As long as the above sentence accurately sums up the GIT no greater understanding of the GIT is required to totally refute it:

(1) Every WFF of all formal systems must be a truth bearer.

(2) If no formal proof exists within a formal system to show that an expression evaluates to exactly one of the set: {true, false}, then this expression is not a truth bearer, and not a WFF in this formal system.

(3) Therefore there are no WFF in any formal system which can be neither proved nor disproved within this formal system.

**The yellow highlighted pages of the provided book show two things:**

(1) Solving the Liar Paradox is crucially important to fixing the currently inconsistent understanding of the concept of Truth.

(2) Prior to the solution provided on this website it was only understood that the Liar Paradox is defective; the precise nature of this defect was unknown.
