Partial Inconsistency and Vector Semantics of Programming Languages




The timeline of this page goes back to 2012-2013. It starts with the observation that for approximation domains to become vector spaces it is necessary to have enough overdefined (that is, partially inconsistent) elements, so that they can cancel with underdefined (partially defined) elements under addition and produce zero.
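A minimal worked instance of this cancellation (the details are in the interval numbers preprint listed below): for interval numbers with addition defined endpoint-wise,

    [a, b] + [c, d] = [a + c, b + d],    zero = [0, 0],    -[a, b] = [-a, -b],

an ordinary underdefined segment such as [1, 2] only acquires an additive inverse once we admit the overdefined pseudo-segment [-1, -2] "of negative length": [1, 2] + [-1, -2] = [0, 0].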

A rich mathematical theory (partial inconsistency landscape) results from this observation.

In 2014 the page progresses to studying computations with linear streams. In 2015 we understand that if one maintains a discipline of interleaving linear and non-linear transformations, then large classes of programs are parametrized by matrices of numbers. (In 2016 we start calling this architecture dataflow matrix machines.) Our first related software prototypes appear in 2015, and our GCAI 2015 paper is published.
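Here is a minimal sketch of what this interleaving discipline looks like in code (a toy illustration in Clojure, not our actual prototypes; stream values are plain numbers, every neuron uses tanh, and the whole program is parametrized by the matrix w):

    ;; A minimal sketch of the "interleave linear and non-linear
    ;; transformations" discipline; not the actual prototype code.
    (defn down-step
      "Linear part: each neuron input is a linear combination of neuron outputs."
      [w outputs]
      (mapv (fn [row] (reduce + (map * row outputs))) w))

    (defn up-step
      "Non-linear part: each neuron applies its activation to its input."
      [inputs]
      (mapv #(Math/tanh %) inputs))

    (defn dmm-step [w outputs]
      (up-step (down-step w outputs)))

    ;; One step of a tiny two-neuron machine:
    (dmm-step [[0.0 1.0] [0.5 0.5]] [1.0 -1.0])
    ;; => approximately [-0.762 0.0]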

In 2016 we realize that dataflow matrix machines can be viewed as generalized recurrent neural networks and study them as a model of computation and as a programming architecture. Starting in the fall of 2016, we develop a particularly elegant flavor of dataflow matrix machines based on the vector space of finite prefix trees with numerical leaves and on variadic neurons, and we create a Clojure implementation of its core primitives.


Our new paper: Michael Bukatin and Jon Anthony. Dataflow Matrix Machines and V-values: a Bridge between Programs and Neural Nets. In "K + K = 120" Festschrift, 2017.




Partial inconsistency landscape

Partial inconsistency, bilattices, ordered Banach spaces of signed measures, negative probabilities, negative self-distances in partial metrics, probabilistic programming, vector space semantics of programming languages, non-monotonic inference, and bitopology seem to play nicely together here.

It seems that the main cornerstones here are the ability to extend some Scott domains to groups and vector spaces, and the "bilattice pattern".
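A standard illustration of the "bilattice pattern" (Belnap's four-valued logic, the smallest non-trivial bilattice; this example is classical rather than ours): the values ⊥ (no information), f, t, and ⊤ (contradictory information) carry two orderings,

    knowledge order:  ⊥ ≤k f ≤k ⊤   and   ⊥ ≤k t ≤k ⊤
    truth order:      f ≤t ⊥ ≤t t   and   f ≤t ⊤ ≤t t

with the underdefined element ⊥ at the bottom of the knowledge order and the overdefined element ⊤ at its top.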


Our Tbilisi paper: Michael Bukatin and Steve Matthews. Linear Models of Computation and Program Learning. GCAI 2015, EasyChair Proceedings in Computing, 36, 66-78.

Brief summary

There are two motivations for this line of research. One is the desire to handle partially contradictory and inconsistent information. Another is the desire to have algebraic structure of groups and of vector spaces on partially defined elements (and on programs).

Both motivations lead to very rich mathematical phenomena and bring together a number of earlier studies done by various researchers. The spaces of partially defined elements are enriched with over-defined elements. Expressions of partial inconsistency include negative probabilities and segments of negative length. The motif of decomposition into positive and negative parts is quite ubiquitous in this context. Non-monotonic and anti-monotonic inference make appearances. Interval numbers enriched with segments of negative length form a group. Probabilistic programs denote linear operators when one allows the measures to take not only positive, but also negative values. Bilattices and bitopologies, paraconsistent and modal logic, and "possible worlds" models make multiple appearances.
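To make the linearity claim concrete (this is the standard kernel formulation, not anything specific to our papers): if a probabilistic program is described by a kernel k(x, A), the probability of reaching the set A of outputs from the input x, then its action on an input measure μ,

    (T μ)(A) = ∫ k(x, A) dμ(x),

is linear, T(aμ + bν) = a T(μ) + b T(ν), and the same formula extends verbatim to signed measures once negative values are admitted.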

Many of these ideas are not new. What seems to be new is the realization that this variety of ideas forms a unified landscape where various parts interact with each other in many interesting and intricate ways. This field seems to be well-positioned for rapid mathematical development and for the emergence of various applications, including better handling of inconsistency in databases, better handling of non-monotonic reasoning and reasoning with contradictory information in AI, better schemes for probabilistic programming, and better machine learning schemes over spaces of programs. The classes of computations which admit taking linear combinations of execution runs, such as probabilistic sampling and generalized animation, are of particular interest in connection with better schemes for program learning.


The slides for my talk at CCNY Joint Math-CS Colloquium, April 11, 2013: Partial inconsistency and mathematics of software (joint work with Ralph Kopperman and Steve Matthews).

(Also, the slides for my talk at the "Conference on generalized metrics for limits, computing, and more" at Medgar Evers College of CUNY, April 10, 2013: Looking for common patterns.)

The abstract and slides for my talk "Partial inconsistency landscape: an overview" on July 24, 2013 at the 28th Summer Conference on Topology and its Applications in North Bay, Ontario.

Michael Bukatin, Ralph Kopperman, and Steve Matthews. Remarks on Partially Inconsistent Interval Numbers. Preprint, November 26, 2013. (Lightly edited on June 25, 2014.)


Here we start talking about potential applications of vector semantics to program learning.


Michael Bukatin, Ralph Kopperman, and Steve Matthews. Partial Inconsistency and Vector Semantics. Preprint, March 14, 2014. (Lightly edited on June 25, 2014.)

The abstract for my two talks "Partial inconsistency, bitopology, and vector semantics" on July 24, 2014 at the 29th Summer Conference on Topology and its Applications in Staten Island, New York.

The slides for the first of these talks, "Partial inconsistency and bitopology".

The slides for the second of these talks, "Partial inconsistency and vector semantics: sampling, animation, and program learning".


The abstract and the slides for my talk "Progress report on partial inconsistency, bitopology, and vector semantics" on November 7, 2014 at a conference on Computational Topology and Its Applications in Kent, Ohio.


Linear combinations of computations

I know two classes of computations which admit taking linear combinations of execution runs: probabilistic sampling and generalized animation.

Such classes of computations are of particular interest in connection with better schemes for program learning.
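A toy illustration in Clojure of what "taking a linear combination of execution runs" means (a hedged sketch, not our prototype code; here an execution run is a lazy stream of frames, and a frame is a plain vector of numbers):

    ;; Mix two execution runs with coefficients a and b; the result is
    ;; the pointwise linear combination of the corresponding frames.
    (defn mix-runs [a run-1 b run-2]
      (map (fn [f1 f2] (mapv #(+ (* a %1) (* b %2)) f1 f2))
           run-1 run-2))

    (take 2 (mix-runs 0.7 (repeat [1.0 0.0]) 0.3 (repeat [0.0 1.0])))
    ;; => ([0.7 0.3] [0.7 0.3])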

Michael Bukatin and Steve Matthews. Linear Models of Computation and Program Learning. Preprint, April 6, 2015. (Lightly edited on May 30, 2015.)

Accepted for publication at GCAI 2015 (Proceedings) on August 10, 2015. The final version (September 3, 2015). The slides for the GCAI talk.


Almost continuous transformations of dataflow programs

Michael Bukatin and Steve Matthews. Almost Continuous Transformations of Software and Higher-order Dataflow Programming. Preprint, July 9, 2015.


Dataflow Graphs as Matrices

Michael Bukatin and Steve Matthews. Dataflow Graphs as Matrices and Programming with Higher-order Matrix Elements. Preprint, August 20, 2015. (Section 4.1 added on August 27, 2015.)


The abstract and slides for my talk, "Linear Models of Computation and Parametrization of Large Classes of Programs by Matrices", at NEPLS 28 on November 10, 2015.


Dataflow Matrix Machines as a Generalization of Recurrent Neural Networks

https://arxiv.org/abs/1603.09002

https://arxiv.org/abs/1605.05296

https://arxiv.org/abs/1606.09470

https://arxiv.org/abs/1610.00831


The series of 4 preprints above is joint work by Michael Bukatin, Steve Matthews, and Andrey Radul. This material was presented during the following two talks:

The abstract and slides for my talk "Vector semantics: from partial inconsistency and bitopology to recurrent neural networks and self-referential dataflow matrix machines" on August 2, 2016 at the 31st Summer Conference on Topology and its Applications in Leicester, England.

The abstract and slides for my talk, "Self-referential Mechanism for Dataflow Matrix Machines and Generalized Recurrent Neural Networks", at NEPLS 30 on October 7, 2016.


Dataflow Matrix Machines in Clojure

Clojure inspired us to consider a version of dataflow matrix machines based on the vector space of finite prefix trees with numerical leaves and variadic neurons.
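A minimal sketch of this representation (not the actual library code): a V-value is a nested Clojure map with numbers at the leaves, and vector-space addition merges the maps recursively, adding the numbers where the paths coincide.

    ;; V-values as nested Clojure maps with numerical leaves;
    ;; addition merges recursively (a sketch, not the library).
    (defn v-add [x y]
      (cond
        (and (number? x) (number? y)) (+ x y)
        (and (map? x) (map? y))       (merge-with v-add x y)
        (nil? x) y
        (nil? y) x
        :else (throw (ex-info "malformed V-value" {:x x :y y}))))

    (v-add {:a 1.0 :b {:c 2.0}} {:b {:c 0.5 :d 3.0}})
    ;; => {:a 1.0, :b {:c 2.5, :d 3.0}}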

Michael Bukatin and Jon Anthony. Dataflow Matrix Machines as a Model of Computations with Linear Streams. Preprint, Apr. 8-9, 2017; minor corrections: Apr. 14 - May 1, 2017. To appear at LearnAut 2017.

arXiv version: https://arxiv.org/abs/1706.00648

The abstract and slides for my talk "Vector space of finite prefix trees for dataflow matrix machines" on March 10, 2017 at the 51st Spring Topology and Dynamical Systems Conference in Jersey City, New Jersey. Photos of the whiteboard and the background of the conference (originals are downloadable).

NEML 2017: abstract and poster (May 12, 2017).

Slides for our "Dataflow matrix machines and V-values" lightning talk at July 13, 2017 Boston Clojure meetup.



Reference paper on Dataflow Matrix Machines and V-values

Michael Bukatin and Jon Anthony. Dataflow Matrix Machines and V-values: a Bridge between Programs and Neural Nets. In: Beáta Gyuris, Katalin Mády, and Gábor Recski, editors, "K + K = 120: Papers dedicated to László Kálmán and András Kornai on the occasion of their 60th birthdays", Research Institute for Linguistics, Hungarian Academy of Sciences, 2017.

Video and slides of the associated talk.



A one-page primer on self-modifying dynamical systems (self-modifying neural nets/self-modifying continuous programs; February 26, 2017).



A one-page overview of dataflow matrix machines (November 25, 2016).



Dataflow Matrix Machines (since 2017)



Back to My Papers in Computer Science