Rabin proposed the oblivious transfer problem, in which Alice sends a message that Bob receives with probability 50%, and Alice cannot tell whether or not Bob received it. We present a formal definition of ``Oblivious Transfer of Prime Factorization'' and expose a potential flaw in Rabin's oblivious transfer protocol. Using Goldwasser, Micali and Rackoff's ``interactive proof'' technique, we modify Rabin's protocol into one which we prove correct.
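For concreteness, here is a minimal sketch of Rabin's original protocol with toy parameters (our own illustrative code, not the paper's modified protocol, which adds an interactive-proof step). Alice holds N = p*q; Bob learns the factorization with probability 1/2, and Alice cannot tell whether he did:

```python
# Sketch of Rabin's oblivious transfer of a factorization (toy primes).
import math, random

p, q = 1000003, 1000039          # Alice's secret primes, both 3 mod 4
N = p * q

# Bob: choose a random x, keep it secret, and send c = x^2 mod N.
x = random.randrange(2, N)
c = pow(x, 2, N)

# Alice: compute a uniformly random one of the four square roots of c.
# Since p and q are 3 mod 4, a root mod p is c^((p+1)/4) mod p.
rp = random.choice([1, -1]) * pow(c, (p + 1) // 4, p)
rq = random.choice([1, -1]) * pow(c, (q + 1) // 4, q)
# Combine the roots mod p and mod q with the Chinese Remainder Theorem.
y = (rp * q * pow(q, -1, p) + rq * p * pow(p, -1, q)) % N

# Bob: if y is not congruent to +-x mod N, then gcd(x - y, N) factors N.
g = math.gcd(x - y, N)
print("Bob factored N" if 1 < g < N else "Bob learned nothing")
```

Because Alice never sees x, she has no way to know whether the root she returned was the one that lets Bob factor N.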
Methods and apparatus for verifying the acceptability of a password proposed by a user of a secure system. The system stores a compressed version of a group of unacceptable passwords in a table of indicators. A mapper assigns indicators to passwords, such that more than one password may be mapped to a single indicator. To initialize the system, an initializer applies the mapper to each unacceptable password of the group, and sets the indicators of the table that are assigned to each unacceptable password. Subsequently, a verifier applies the mapper to a proposed password and checks whether the indicator assigned to the proposed password is set. If the indicator is not set, it is determined without error that the proposed password is not in the group of unacceptable passwords, and the password may thus be assigned privileges in the secure system.
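The table-of-indicators construction is essentially a Bloom-filter-style membership test. A minimal sketch (our own illustration with a hash-based mapper and arbitrarily chosen sizes, not the patent's specific mapper):

```python
# Bit table plus hash mapper: membership in the unacceptable-password
# group can be ruled out with no false negatives.
import hashlib

TABLE_BITS = 1 << 20
table = bytearray(TABLE_BITS // 8)

def indicator(password: str) -> int:
    """Mapper: assign each password an indicator (a bit position)."""
    digest = hashlib.sha256(password.encode()).digest()
    return int.from_bytes(digest[:8], "big") % TABLE_BITS

def set_indicator(password: str) -> None:       # the initializer
    i = indicator(password)
    table[i // 8] |= 1 << (i % 8)

def maybe_unacceptable(password: str) -> bool:  # the verifier
    i = indicator(password)
    return bool(table[i // 8] & (1 << (i % 8)))

for bad in ["password", "123456", "letmein"]:   # initialize from the group
    set_indicator(bad)

# False means the password is certainly not in the unacceptable group.
print(maybe_unacceptable("correct horse battery staple"))
```

An unset indicator proves the password is not in the group; a set indicator may be a collision with another password, so only that side of the test can err.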
The original description of Quantum Money used two truly random bits for each photon stored. We show that the amount of randomness required is considerably less than that. By using game theory to analyze a quantum cryptographic protocol, we show that the protocol's security is remarkably insensitive to bias in one of the "random" bits. First, we show that there is no loss in security if that bit produces a "1" between one quarter and three quarters of the time. Second, we show that it is possible to maintain some security even if that bit is completely predictable. This allows us to make uncopyable tokens using only 2 quantum states instead of the 4 used previously, though one must still be able to detect all 4 states.
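To see why even two states resist copying, consider a toy calculation (ours, not the paper's game-theoretic analysis): a counterfeiter who measures each photon in the intermediate (Breidbart) basis and resends the outcome passes each verification measurement with probability only 3/4.

```python
# Intercept-resend attack on a token built from |0> and |+> only.
import numpy as np

zero = np.array([1.0, 0.0])                    # |0>
plus = np.array([1.0, 1.0]) / np.sqrt(2)       # |+>
t = np.pi / 8                                   # intermediate basis angle
basis = [np.array([np.cos(t), np.sin(t)]),
         np.array([-np.sin(t), np.cos(t)])]

def pass_prob(state):
    # Counterfeiter measures and resends the outcome state; the verifier
    # then projects back onto the original state.
    return sum(abs(state @ b) ** 2 * abs(b @ state) ** 2 for b in basis)

print(pass_prob(zero), pass_prob(plus))        # both 0.75
# A token with n photons survives copying with probability (3/4)^n.
```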
We propose a method of using validated interval constraint contraction operators to build routines for numerical computation libraries. We illustrate the method by using it to construct a procedure that efficiently computes a small interval containing f^{-1}(h) where f(x) = x sin(x). This can be transformed into a numerical routine by returning the midpoint of the interval and signaling an exception if the relative width of the interval exceeds a specified bound. We compare the interval method with a strictly numerical approach. The chief advantage of our method is that it results in a procedure which returns an interval that is guaranteed to contain the correct solution (assuming, of course, that the hardware and the compiler meet the necessary specifications). By automating those parts of the computation which could affect the correctness of the procedure, we reduce the number of places where the programmer can err. The user is free to design complex algorithms for solving the problem at hand, with the restriction that the algorithm consists of applying a sequence of validated constraint contractors which are constructed automatically from the constraint set specifying the problem. Modulo assumptions about the correctness of the contractors, the only error that the user can make is to produce a procedure which returns intervals that are too large to be useful. This is in contrast to traditional methods, which always return answers with 52 bits of precision but with little or no indication of how many of those bits are correct.
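To give the flavor of validated enclosure, here is a minimal branch-and-prune sketch of our own, built on mpmath's interval arithmetic rather than the automatically constructed contractors the paper describes:

```python
# Enclose solutions of f(x) = x*sin(x) = h: discard boxes whose interval
# image cannot contain h; bisect the rest until they are narrow.
from mpmath import iv

def f(box):
    return box * iv.sin(box)

def enclose(lo, hi, h, tol=1e-9):
    """Return small subintervals of [lo, hi] that may contain f^{-1}(h)."""
    stack, hits = [(lo, hi)], []
    while stack:
        a, b = stack.pop()
        image = f(iv.mpf([a, b]))
        if not (image.a <= h <= image.b):   # prune: no solution possible here
            continue
        if b - a < tol:                     # narrow enough: report enclosure
            hits.append((a, b))
        else:                               # branch: bisect and re-examine
            m = (a + b) / 2
            stack += [(a, m), (m, b)]
    return hits

print(enclose(0.0, 2.0, 1.0))  # tight enclosure of the root of x*sin(x) = 1
```

Every interval operation rounds outward, so the returned enclosures are guaranteed even in the presence of floating-point error; the only possible failure mode is an interval too wide to be useful, exactly as the abstract describes.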
We use CLP(F), an Analytic Constraint Logic Programming language, to model hybrid systems. Analytic CLP languages combine intervals, constraints, and ODEs in a clean and natural way. CLP(F) provides an implementation of an ACLP language based on Interval Arithmetic. The semantics of CLP(F) rigorously handle non-linear ODEs and round-off error. The ODEs describing a hybrid system need only a minor change of syntax to become a CLP(F) program. This simple transformation from a physical description of a hybrid system to a program which can be used to prove safety properties of the system bridges the gap between practical tools and formal models, and allows one to easily prove statements about real-world systems. The combination of Interval Arithmetic with Analytic Constraint Logic Programming makes it easy to pose and answer many sorts of queries about a system. For example, "At what point does the system change from one state to another?", or "What control settings result in a cycle with period t?"
We provide a rigorous approach to modeling, simulating, and analyzing hybrid systems using CLP(F) (Constraint Logic Programming over Functions), a system which combines CLP (Constraint Logic Programming) with interval arithmetic. We have implemented this system, and provide timing information. Because models of hybrid systems are often used to prove safety properties, a rigorous analysis is critical. By using intervals throughout the system, we make it easier to include measurement errors in our models and to prove safety properties.
This is a start at documenting CLIP. Eventually this will grow into a manual. CLP(I) (Constraint Logic Programming over Intervals) is a constraint logic programming (CLP) language whose domain is the real numbers, represented using Moore's interval arithmetic. CLP(F) is a CLP language whose domain is analytic functions over the reals. CLP(F) is written in CLP(I). CLIP is an implementation of CLP(I) built on top of GNU Prolog. The current distribution of CLIP includes both CLP(I) and CLP(F) by default.
We wish to rigorously model hybrid systems. Because models of hybrid systems are often used in safety-critical applications, it is crucial to get accurate results with explicit limits on the errors in the model and calculations. This requires a rigorous approach. Any technique to model these systems which does not account for rounding error or error in the approximation to the solution of the ODEs governing the problem is not rigorous. We use CLP (Constraint Logic Programming) over interval arithmetic to provide an explicit limit on rounding errors and on the ODE solution errors.
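As an illustration of what an explicit limit on the ODE solution error means (a hypothetical sketch in mpmath interval arithmetic, ours rather than CLIP's machinery): a single Picard-operator step can prove that a candidate interval encloses the true trajectory, rounding error included.

```python
# One verified Picard step: for x' = f(x) with x(0) in X0, if
# X0 + [0,h] * f(B) is contained in B, then every solution stays in B
# on [0,h]. All operations are outward-rounded interval arithmetic.
from mpmath import iv

def picard_verified(f, X0, B, h):
    P = X0 + iv.mpf([0, h]) * f(B)
    return P.a >= B.a and P.b <= B.b   # inclusion proves the enclosure

# Example: x' = -x with x(0) in [0.9, 1.1]; candidate B = [0.5, 1.2].
print(picard_verified(lambda x: -x,
                      iv.mpf([0.9, 1.1]), iv.mpf([0.5, 1.2]), 0.1))  # True
```

A True result is a proof, not an estimate: both the discretization error and the rounding error are absorbed into the interval bounds.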
Pixelization is the simple yet powerful technique of mapping each element of some data set to a pixel in a 2D image. There are two primary characteristics of pixels that can be leveraged to impart information: (1) their color and color-related attributes (hue, saturation, etc.), and (2) their arrangement in the image. We have found that applying a dimensional stacking layout to pixelization uniquely facilitates interactive data mining, directs user queries, and provides a type of visual principal component analysis. In this paper we describe our approach and how it is being used to analyze multidimensional, multivariate neuroscience data.
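Dimensional stacking gives every bin of an N-dimensional grid a unique pixel by nesting coarse "outer" dimensions around fine "inner" ones. A minimal sketch (our own hypothetical helper, not the paper's code):

```python
# Dimensional-stacking layout: outer dimensions select coarse blocks,
# inner dimensions select positions within them.
def stack_position(coords, sizes):
    """coords/sizes alternate (x-dim, y-dim, x-dim, ...), outermost first.
    Returns the (x, y) pixel for one N-D bin."""
    x = y = 0
    for i, (c, s) in enumerate(zip(coords, sizes)):
        if i % 2 == 0:   # even-indexed dims stack horizontally
            x = x * s + c
        else:            # odd-indexed dims stack vertically
            y = y * s + c
    return x, y

# 4-D data with 3 x 2 outer bins and 4 x 5 inner bins -> a 12 x 10 image.
print(stack_position((2, 1, 3, 4), (3, 2, 4, 5)))  # (11, 9)
```

Because nearby pixels share their outer-dimension values, coherent blocks in the image correspond to slices of the data, which is what makes the layout useful for visual exploration.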
We study the ``Two Tanks Problem'', a hybrid system described by Stursberg et al. In this paper, we expand on the use of CLIP (a Constraint Logic Programming over Intervals and Functions language) to formally describe more complex systems. We add complexity in several forms. The simplest is a larger system: we move from a system with two tanks to one with four tanks, and we add non-linear valves to the pipes connecting the tanks. This example generalizes easily to an N-tanks problem where the tanks, connected by pipes, form an arbitrarily complex graph. The more important addition is the refinement of the model in several places. We rigorously model a valve in which the flow varies exponentially with the valve position over much of the valve's range, and then discontinuously as the valve is almost closed. We introduce hysteresis in our analysis to avoid an infinite loop of zero-time transitions, and we discuss why our techniques should not have trouble with ``Zeno'' transitions. The possibility of Zeno behaviour (Zhang et al.) can arise either for physical reasons (a value near zero, so the sign of the change is hard to determine) or for modeling reasons (the system is near the boundary between two behaviour regimes, and while both regimes describe similar behaviour near the boundary, the model might switch between the two regimes infinitely often in a finite time). An elegant feature of our model is that we use the same technique of hysteresis to prevent Zeno behaviour from either cause. This is easily done in CLIP by extending the conditions for a state change from one regime to the other to include hysteresis, as sketched below.
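The hysteresis idea itself is simple. A minimal sketch (ours, in ordinary code rather than CLIP's constraint syntax, with hypothetical names): instead of switching regimes exactly at a threshold L, switch up at L + eps and down at L - eps, so every switch requires a finite change in level and zero-time switching loops become impossible.

```python
# Two-regime guard with a hysteresis band [L - eps, L + eps].
def next_mode(mode, level, L, eps):
    if mode == "low" and level > L + eps:
        return "high"
    if mode == "high" and level < L - eps:
        return "low"
    return mode  # inside the band: keep the current regime
```

The same band dampens both causes of Zeno behaviour: a physical value hovering near zero and a model sitting on the boundary between two regimes.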
David K. Wittenberg, Jeffrey Smith, Robert Gray, Gregory Eakman
Automotive Vulnerability Detection System
Embedded Security in Cars 2016
pdf
In [1] we presented a Vulnerability Detection System (VDS) that can
detect emergent vulnerabilities in complex Cyber Physical Systems
(CPSs). It used the attacker’s point of view by collecting a target
system’s vulnerability information from varied sources and populating
an Attack Point (AP) database. From these APs, a Hierarchical Task
Network (HTN) generated the set of composite device-level attack
scenarios. The VDS used Alloy [2], a Satisfiability (SAT) planner to
reduce the cardinality of the generated space by evaluating the
feasibility of each attack. In [3], we specialized the VDS for the
automobile domain. This paper further 1) specializes our prior
research by submitting the generated prioritized list to an
Automotive-specific Attack Evaluation Process (AAEP) and 2) enhances
our prior research with a method to discover and test vulnerabilities
by reverse engineering the actual binary code. With a combination of
simulation and vehicle instrumented real-time execution, the AAEP
confirms each candidate attack. The AAEP’s output is used as feedback
to refine the SAT constraint model. A novel part of AAEP is our
Automated Reverse Engineering (ARE) system, which greatly reduces the
search space for software bugs. The VDS is designed to support short
product release cycles.
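Schematically, the feedback loop described above looks like this (a purely illustrative skeleton with names of our own choosing, not the paper's code):

```python
# Planner proposes candidate attacks; evaluation confirms or refutes them;
# refuted attacks become constraints that prune the next planning round.
def vulnerability_detection_loop(plan_attacks, evaluate, max_rounds=10):
    """plan_attacks(constraints) -> candidate attack scenarios;
    evaluate(attack) -> True if confirmed by simulation/instrumented run."""
    constraints, confirmed = [], []
    for _ in range(max_rounds):
        candidates = plan_attacks(constraints)
        if not candidates:
            break
        for attack in candidates:
            if evaluate(attack):          # AAEP confirms the attack
                confirmed.append(attack)
            else:                         # infeasible: rule it out next round
                constraints.append(attack)
    return confirmed
```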
David K. Wittenberg, Edin Kadric, André DeHon, Jonathan Edwards,
Jeffrey Smith, Silviu Chiricescu
PERFECT Case Studies Demonstrating Order of Magnitude Reduction in
Power Consumption
High Performance Extreme Computing 2016
paper - pdf
slides - pdf
slides - powerpoint
We propose three methods for reducing power consumption in
high-performance FPGAs (field programmable gate arrays). We show that
by using continuous hierarchy memory, lightweight checks, and lower
chip voltage for near-threshold voltage computation, we can both
reduce power consumption and increase reliability without a decrease
in throughput.
We have implemented these techniques in two different, realistic wide-area motion imagery algorithms on FPGAs. We demonstrated greatly improved performance and efficiency compared to two flight-tested platforms, achieving up to a 250X improvement in power efficiency (measured in giga-operations per second per watt). This paper summarizes these two case studies.
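Of the three techniques, lightweight checks are the easiest to illustrate in software (a toy sketch of ours, not the paper's hardware): a cheap residue check detects arithmetic faults, which is what makes it safe to lower the operating voltage toward threshold.

```python
# Toy residue check: a mod-3 residue is far cheaper than recomputation and
# catches every single-bit error (a bit flip changes the value by +-2^k,
# and 2^k mod 3 is never 0). On failure, retry at a safe voltage.
def checked_multiply(a: int, b: int) -> int:
    product = a * b                       # the (possibly faulty) operation
    if product % 3 != ((a % 3) * (b % 3)) % 3:
        raise ArithmeticError("residue check failed; retry at nominal voltage")
    return product
```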