The science behind our Phase Change innovation
At Phase Change, we accelerate the software engineering process, support revolutionary new capabilities, and dramatically increase productivity, quality, and speed-to-market.
Of course, you are skeptical. We expect you to have questions. These claims have been made before.
We went all the way back to the genesis of computation theory and realized that while the original science is valid, it is being misinterpreted.
It is in the math and the science — the deep science — where we treat software differently and get fundamentally different results.
Georg Cantor and the order of infinities
In 1874, mathematician Georg Cantor invented set theory, which became a fundamental tenet of modern mathematics. He defined the concepts of transfinite numbers and well-ordered sets, and proved that the real numbers form a larger order of infinity than the natural numbers.
Since the natural numbers map one-to-one with objects in the physical world, Cantor’s work expanded the domain of mathematics from the experience of the physical universe to any universe conceived by the mathematician’s mind.
Cantor’s conception of set theory and infinity became the dominant paradigm in modern mathematics and was used without question by all early computational theorists.
Turing, Rice and the Halting problem
In 1928, David Hilbert challenged the mathematics community to address the following question: could every logical proposition be validly answered? In other words, could the proposition be decided? This challenge was called the Entscheidungsproblem, or decision problem, and it is the origin of our modern concept of decidability.
In a paper published in 1936, Alan Turing proved that no general algorithm to decide halting can exist. His formulation showed that the existence of such an algorithm would lead to a logical paradox; therefore, halting is undecidable.
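Turing's argument can be sketched in modern terms. Suppose, hypothetically, that a function `halts(program)` correctly decides whether a zero-argument program halts; the self-referential construction below defeats any such candidate. (A minimal sketch with illustrative names, not any real API.)

```python
def make_paradox(halts):
    """Given a claimed halting decider, build a program it must misjudge."""
    def paradox():
        if halts(paradox):   # the decider says "paradox halts" ...
            while True:      # ... so paradox loops forever instead
                pass
        # the decider says "paradox loops", so paradox halts immediately
    return paradox

# Try one concrete (and necessarily wrong) decider: "nothing ever halts".
claims_it_loops = lambda program: False
paradox = make_paradox(claims_it_loops)
paradox()  # returns immediately, contradicting the decider's prediction
```

The symmetric case is the one we cannot observe directly: a decider that answers "halts" would make `paradox` loop forever, again contradicting itself.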
In 1953, as the digital computer era began and programming became a practical concern, Henry Gordon Rice built upon Turing’s proof to show that undecidability generalizes to all non-trivial semantic properties of programs.
Turing’s proof and Rice’s derivative work exploited Cantor’s orders of infinity to expose logical inconsistencies, or paradoxes. Since Turing’s work preceded digital computers, he used a pencil-and-paper computational framework in his proof — the Turing machine — and enumerated all possible Turing machines.
He mapped every possible Turing machine to the infinity of the natural numbers, and then used the mathematical technique of diagonalization to construct a counter-example in a higher order of infinity. In short, the crucial logical inconsistency, the paradox, is a Turing machine in a higher order of infinity.
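Diagonalization itself is easy to demonstrate on a finite list: flip the i-th bit of the i-th row, and the resulting string differs from every row at least once. (A small illustrative sketch; Turing applied the same trick to an infinite enumeration of machines.)

```python
def diagonal_escape(rows):
    """Given n bit-strings of length n, build one that differs from each row i
    at position i -- so it cannot appear anywhere in the list."""
    return "".join("1" if rows[i][i] == "0" else "0" for i in range(len(rows)))

table = ["0110", "1010", "0011", "1111"]
missing = diagonal_escape(table)
print(missing)           # "1100" -- differs from row i at position i
print(missing in table)  # False
```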
Do these theories about infinity and an infinite number of programs apply to finite programs in the real world?
Kronecker rejects notions of infinite sets
In 1886, Leopold Kronecker, a wealthy businessman and professor at the University of Berlin, rejected Cantor’s notions of infinite sets and irrational numbers.
He maintained that a theory’s logical correctness does not imply the existence of the entities it purports to describe, and that they remain devoid of any significance unless they can actually be produced.
If we accept Kronecker’s rejection of infinite sets, how does that change our interpretation and application of Turing’s and Rice’s theorems?
What if we reinterpret history?
Alongside his other foundational work, L.E.J. Brouwer introduced intuitionism in the 1920s. Simply stated, intuitionism is the foundational mathematical philosophy that mathematics is purely the result of constructive human mental activity.
We began to wonder: what if, instead of using Cantor’s abstract math to analyze physical computations, we used Kronecker’s and Brouwer’s real-world math to understand real-world software programs?
This question led us to the foundational science that unlocks the meaningful knowledge in software.
“To understand the development of the opposing theories existing in this field one must first gain a clear understanding of the concept 'science'; for it is as a part of science that mathematics originally took its place in human thought.”
— L.E.J. Brouwer, “Intuitionism and Formalism” (1912)
Simon and Newell establish the first AI lab
We combine the right math with AI innovations to change the essence of the software development process.
Herbert Simon, Allen Newell, and others pioneered AI in the 1950s. Then, while working at MIT’s AI Laboratory, Marvin Minsky and Seymour Papert proposed that AI research focus on developing programs capable of intelligent behavior in artificially simple situations. These situations came to be known as micro-worlds.
Following this approach, we normalize software into a formal world. Like early AI’s micro-worlds, our normalized representation is finite and maps to a subset of reality — a subset of the natural numbers.
Proofs that use concepts of infinity and higher orders of infinity simply do not work in this context. Diagonalization cannot be usefully applied because counter-examples in alternative mathematical universes have no practical significance in our reality.
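The claim that a finite representation maps to a subset of the natural numbers can be made concrete: any program text, being a finite byte sequence, encodes reversibly as a single integer. (A toy illustration of such a mapping — a Gödel-style numbering — not Phase Change's actual representation.)

```python
def program_to_number(source: str) -> int:
    """Encode program text as one natural number (bijective base-256)."""
    n = 0
    for byte in source.encode("utf-8"):
        n = n * 256 + byte + 1  # digits 1..256 keep the encoding reversible
    return n

def number_to_program(n: int) -> str:
    """Invert the encoding: recover the original program text."""
    out = bytearray()
    while n:
        n, digit = divmod(n - 1, 256)
        out.append(digit)
    return bytes(reversed(out)).decode("utf-8")

src = "print('hi')"
assert number_to_program(program_to_number(src)) == src  # round-trips exactly
```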
Correspondingly, one can easily see the usefulness of constructivist and intuitionist mathematics: software physically embodies the mental constructions of the programmer.
In dynamic execution, software mechanically carries out the construction of the data states the programmer intended. This all conforms to intuitionist and constructivist mathematics.
Reveal the simplicity in overwhelming complexity
In 2009, during a time of significant transition in AI research, Peter Norvig and his colleagues at Google published an influential paper urging machine-translation and speech-recognition researchers to set aside theory development and instead “embrace complexity and make use of the best ally we have: the unreasonable effectiveness of data.”
Data science algorithms learn about human intention from data patterns, and then use this learning to create new algorithms that capture the essence of the intention, such as natural-language comprehension.
Phase Change does the same with software. The hurdle we faced was that data-science algorithms, by definition, require a formalism and a machine interpretation of meaning. To write an algorithm over floating-point numbers, floating-point arithmetic has to conform to the rules of arithmetic.
How does one transform programs into data representations that are similar to floating-points for numbers or strings for text? They must have formal operations like arithmetic and concatenation. The representations and operations must capture the semantics of what human engineers intend when they write and manipulate programs. This is no mean feat.
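As a toy version of the idea, one can represent programs as data values (a tiny AST) equipped with a formal composition operation, and an interpreter that gives those values their intended semantics. All names here are hypothetical illustrations, not Phase Change's formalism.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Assign:
    """One statement of a tiny language: target = source + add."""
    target: str
    source: str
    add: int

def compose(p, q):
    """A formal operation on program-values, analogous to concatenation
    on strings: the program that runs p, then q."""
    return p + q

def run(program, env):
    """The semantics: interpret a list of Assign nodes over an environment."""
    env = dict(env)
    for stmt in program:
        env[stmt.target] = env.get(stmt.source, 0) + stmt.add
    return env

p = [Assign("x", "x", 1)]    # x = x + 1
q = [Assign("y", "x", 10)]   # y = x + 10
print(run(compose(p, q), {"x": 0}))  # {'x': 1, 'y': 11}
```

Because the programs are ordinary data values with a formal operation, they can be counted, compared, and fed to learning algorithms the same way numbers and strings can.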
One can now see why constructivist mathematics is essential: it is a mathematical theory amenable to representing the programmer’s intention and the consequent behavior of programs.
Thus, we have cleared the hurdle, transforming software into data. This makes the software variant of Norvig’s complexity amenable to data science and modern AI.
Chaos to Coherence
We are getting fundamentally different results because we are doing something fundamentally different.
Enlightened by the science and scientists that came before us, we are changing the essence of software, turning chaotic code into coherent data.
We transform intractable and hard-to-understand software into artificially intelligent agents that actively assist every role in every stage of the software development process.
For a summary of the deep science behind Phase Change’s AI technology, listen to Founder and CEO Steve Bucuvalas’ podcast: The Turing Machine, the Halting Problem, and Rice’s use of the Turing Proof, and download his technical paper: An Analogy: Software AI and Natural Language.