Abstracts of David Wittenberg's Publications


  1. Jonathan B. Wittenberg, William Wittenberg, and David K. Wittenberg,
    Roles of Blood Plasma and Erythrocytes in Secretion of Inert Gas into the Teleost Swimbladder (abstract).
    Biological Bulletin, 151(2):434--435, Oct. 1976.

  2. David Mauzerall and David Wittenberg,
    The Size of the Photosynthetic Unit and its Turnover Time in Various Seaweeds (abstract).
    Biological Bulletin, 157(2):382, Oct. 1979.

  3. William Wittenberg, David K. Wittenberg, and Jonathan B. Wittenberg,
    Secretion of Nitrogen into the Swimbladder of Fish I. Secretion by Fishes Nearly Lacking Circulating Hemoglobin. Role of the Rete Mirabile.
    Biological Bulletin, 161(3):426--439, Dec. 1981.

  4. David K. Wittenberg, William Wittenberg, Jonathan B. Wittenberg, and Nobutomo Itada,
    Secretion of Nitrogen into the Swimbladder of Fish II. Molecular Mechanism. Secretion of Noble Gases.
    Biological Bulletin, 161(3):440--451, Dec. 1981.

  5. Michael J. Fischer, Silvio Micali, Charles Rackoff, and David K. Wittenberg,
    Proof of an Oblivious Transfer Protocol, or: A Proven Oblivious Transfer Protocol.
    Presented at the Conference on the Mathematical Theory of Security, MIT Endicott House, Dedham, Massachusetts, 1985.

    Rabin proposed the oblivious transfer problem, in which Alice tries to send a message to Bob that Bob receives with 50% probability, while Alice does not learn whether or not Bob received it. We present a formal definition of ``Oblivious Transfer of Prime Factorization'' and expose a potential flaw in Rabin's oblivious transfer protocol. Using Goldwasser, Micali, and Rackoff's ``interactive proof'' technique, we modify Rabin's protocol into one which we prove correct.
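
    To make the protocol concrete, here is a minimal Python sketch of Rabin-style oblivious transfer of a factorization, with toy parameters chosen for readability. It shows only the basic mechanism (Bob learns the factorization with probability 1/2, and Alice cannot tell whether he did), not the formal definition or the corrected protocol from the paper.

      import math
      import random

      # Toy parameters for illustration: a Blum integer N = p*q with p, q = 3 (mod 4).
      p, q = 499, 547
      N = p * q

      def sqrt_mod_blum(c, p, q):
          """Return all four square roots of c mod N, using knowledge of p and q."""
          rp = pow(c, (p + 1) // 4, p)   # a square root mod p (valid since p = 3 mod 4)
          rq = pow(c, (q + 1) // 4, q)   # a square root mod q
          roots = set()
          for sp in (rp, p - rp):
              for sq in (rq, q - rq):
                  # Combine the residues with the Chinese Remainder Theorem.
                  roots.add((sp * q * pow(q, -1, p) + sq * p * pow(p, -1, q)) % N)
          return sorted(roots)

      # Bob picks a secret x coprime to N and reveals only x^2 mod N.
      x = random.randrange(2, N)
      while math.gcd(x, N) != 1:
          x = random.randrange(2, N)
      c = pow(x, 2, N)

      # Alice, who knows p and q, replies with one of the four roots at random.
      y = random.choice(sqrt_mod_blum(c, p, q))

      # With probability 1/2, y is neither x nor -x mod N, and Bob factors N.
      g = math.gcd((x - y) % N, N)
      print("Bob learned a factor of N" if 1 < g < N else "Bob learned nothing")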

  6. David K. Wittenberg and Jerrold S. Leichter,
    System for controlling access to a secure system by verifying acceptability of proposed password by using hashing and group of unacceptable passwords.
    United States Patent 5,204,966, 1993. text

    Methods and apparatus for verifying the acceptability of a password proposed by a user of a secure system. The system stores a compressed version of a group of unacceptable passwords in a table of indicators. A mapper assigns indicators to passwords, such that more than one password may be mapped to an indicator. To initialize the system, an initializer applies the mapper to each unacceptable password of the group, and sets the indicators of the table that are assigned to each unacceptable password. Subsequently, a verifier applies the mapper to a proposed password and checks whether the indicator assigned to the proposed password is set. If the indicator is not set, it is determined without error that the proposed password is not in the group of unacceptable passwords, and the password may thus be assigned privileges in the secure system.
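
    The construction described is essentially what is now called a Bloom filter. Below is a minimal Python sketch under assumed parameters (a SHA-256 mapper, a 2^20-bit table, four mappers per password); the patent does not prescribe these particular choices, and the names are invented.

      import hashlib

      TABLE_BITS = 2 ** 20   # size of the indicator table (an assumption of this sketch)

      def mapper(password, salt):
          """Assign an indicator to a password; distinct passwords may share one."""
          digest = hashlib.sha256(f"{salt}:{password}".encode()).digest()
          return int.from_bytes(digest[:8], "big") % TABLE_BITS

      class PasswordScreen:
          """A compressed table of indicators for a group of unacceptable passwords."""

          def __init__(self, unacceptable, num_mappers=4):
              self.table = bytearray(TABLE_BITS // 8)
              self.num_mappers = num_mappers
              for pw in unacceptable:   # initializer: set every assigned indicator
                  for salt in range(num_mappers):
                      bit = mapper(pw, salt)
                      self.table[bit // 8] |= 1 << (bit % 8)

          def may_accept(self, proposed):
              """True means the password is certainly not in the group (no false negatives)."""
              for salt in range(self.num_mappers):
                  bit = mapper(proposed, salt)
                  if not self.table[bit // 8] >> (bit % 8) & 1:
                      return True    # some indicator unset: definitely not listed
              return False           # all indicators set: reject (may be a false positive)

      screen = PasswordScreen(["password", "123456", "qwerty"])
      print(screen.may_accept("password"))        # False: its indicators are set
      print(screen.may_accept("correct horse"))   # True, with high probability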

  7. David K. Wittenberg,
    Reducing the Randomness Requirements for Quantum Money.
    Technical Report CS-95-177, Computer Science Department, Brandeis University, 1995. ps, pdf

    The original description of Quantum Money used two truly random bits for each photon stored. We show that the amount of randomness required is considerably less than that. By using game theory to analyze a quantum cryptographic protocol, we show that the protocol's security is remarkably insensitive to bias in one of the "random" bits. First, we show that there is no loss in security if that bit produces a "1" between one quarter and three quarters of the time. Second, we show that it is possible to maintain some security even if that bit is absolutely predictable. This allows us to make uncopyable tokens using only 2 quantum states instead of the 4 which have previously been used, though one must still be able to detect all 4 states.
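
    To make the role of the biased bit concrete, here is a toy Monte Carlo of our own devising, much cruder than the game-theoretic analysis in the report: it estimates how often a naive measure-and-resend counterfeiter passes per-photon verification as the bias of the basis-choice bit varies.

      import random

      def forgery_pass_rate(bias, trials=100_000):
          """Per-photon verification pass rate for a measure-and-resend forger.
          bias = probability that the mint's basis bit is 0; the forger always
          measures in the more likely basis."""
          guess = 0 if bias >= 0.5 else 1
          passes = 0
          for _ in range(trials):
              basis = 0 if random.random() < bias else 1
              if basis == guess:
                  passes += 1          # right basis: the copy is perfect
              elif random.random() < 0.5:
                  passes += 1          # wrong basis: the resent photon still
                                       # passes verification half the time
          return passes / trials

      for bias in (0.50, 0.75, 0.90):
          print(f"bias {bias:.2f}: pass rate ~ {forgery_pass_rate(bias):.3f}")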

  8. Timothy J. Hickey and David K. Wittenberg,
    Validated Constraint Compilation.
    Technical Report CS-99-201, Computer Science Department, Brandeis University, 1999. pdf

    We propose a method of using validated interval constraint contraction operators to build routines for numerical computation libraries. We illustrate the method by using it to construct a procedure that efficiently computes a small interval containing f^{-1}(h) where f(x) = x sin(x). This can be transformed into a numerical routine by returning the midpoint of the interval and signaling an exception if the relative width of the interval exceeds a specified bound. We compare the interval method with a strictly numerical approach. The chief advantage of our method is that it results in a procedure which returns an interval that is guaranteed to contain the correct solution (assuming of course that the hardware and the compiler meet the necessary specifications). By automating those parts of the computation which could affect the correctness of the procedure, we reduce the number of places where the programmer can err. The user is free to design complex algorithms for solving the problem at hand, with the restriction that the algorithm consists of applying a sequence of validated constraint contractors which are constructed automatically from the constraint set specifying the problem. Modulo assumptions about the correctness of the contractors, the only error that the user can make is to produce a procedure which returns intervals that are too large to be useful. This is in contrast to traditional methods which always return answers with 52 bits of precision but with little or no indication of how many of those bits are correct.
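
    For flavor, here is a plain floating-point bisection in Python that brackets f^{-1}(h) for f(x) = x sin(x) on [0, pi/2], where f is increasing. Unlike the validated contractors of the report, it evaluates f without outward rounding, so its answer carries no guarantee; it only illustrates the interface such a routine would have.

      import math

      def f(x):
          return x * math.sin(x)

      def invert_f(h, lo=0.0, hi=math.pi / 2, tol=1e-12):
          """Return [lo, hi] with f(lo) <= h <= f(hi) and hi - lo <= tol.
          NOT validated: a rigorous version would evaluate f on intervals
          with outward rounding so round-off error cannot lose the root."""
          assert f(lo) <= h <= f(hi), "h is outside the range of f on [lo, hi]"
          while hi - lo > tol:
              mid = 0.5 * (lo + hi)
              if f(mid) < h:
                  lo = mid
              else:
                  hi = mid
          return lo, hi

      lo, hi = invert_f(1.0)
      print(f"f^-1(1.0) lies in [{lo:.15f}, {hi:.15f}] (width {hi - lo:.1e})")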

  9. Timothy J. Hickey and David K. Wittenberg,
    Validated Constraint Compilation.
    in Joxan Jaffar, editor, Principles and Practice of Constraint Programming - CP'99, volume 1713 of Lecture Notes in Computer Science, pages 482--483, 1999. ps, pdf. A longer version is [8].

  10. Timothy J. Hickey and David K. Wittenberg,
    Using Analytic CLP to Model and Analyze Hybrid Systems.
    Technical Report CS-03-240, Computer Science Department, Brandeis University, 2003. pdf. A later version is [13].

    We use CLP(F), an Analytic Constraint Logic Programming language, to model hybrid systems. Analytic CLP languages combine intervals, constraints, and ODEs in a clean and natural way. CLP(F) provides an implementation of an ACLP language based on Interval Arithmetic. The semantics of CLP(F) rigorously handle non-linear ODEs and round-off error. The ODEs describing a hybrid system need only a minor change of syntax to become a CLP(F) program. This simple transformation from a physical description of a hybrid system to a program which can be used to provide a proof of safety properties of the system bridges the gap between practical tools and formal models, and allows one to easily prove statements about real-world systems. The combination of Interval Arithmetic with Analytic Constraint Logic Programming makes it easy to pose and answer many sorts of queries about a system. For example, "At what point does the system change from one state to another?" or "What control settings result in a cycle with period t?"

  11. Timothy J. Hickey and David K. Wittenberg,
    Rigorous Modeling of Hybrid Systems using Interval Arithmetic Constraints.
    Technical Report CS-03-241, Computer Science Department, Brandeis University, 2003. pdf. A later version is [14].

  12. David K. Wittenberg,
    On Unverifiable Facts.
    Brandeis Graduate Journal, 1(1), 2003. pdf

    We consider physical facts which cannot be verified in practice. We are not interested in classic metaphysical or philosophical questions, which cannot be decided even in principle, but in facts whose answer is in theory easy to measure, yet in reality essentially impossible to measure. Most of the time, the facts we are interested in are probabilities of events which are too rare to measure directly, or which would involve experiments far too damaging to contemplate. One example is estimating the probability of catastrophic failure in a space shuttle launch or a nuclear reactor.

  13. Timothy J. Hickey and David K. Wittenberg,
    Using Analytic CLP to Model and Analyze Hybrid Systems.
    in Valerie Barr and Zdravko Markov, editors, FLAIRS: Florida AI Research Symposium, pages 269--274, AAAI Press, 2004. pdf

  14. Timothy J. Hickey and David K. Wittenberg,
    Rigorous Modeling of Hybrid Systems using Interval Arithmetic Constraints.
    in Rajeev Alur and George J. Pappas, editors, Hybrid Systems: Computation and Control HSCC 2004, volume 2993 of Lecture Notes in Computer Science, pages 402--416, Springer Verlag, 2004. ps, pdf

    We provide a rigorous approach to modeling, simulating, and analyzing hybrid systems using CLP(F) (Constraint Logic Programming (Functions)), a system which combines CLP (Constraint Logic Programming) with interval arithmetic. We have implemented this system, and provide timing information. Because hybrid systems are often used to prove safety properties, it is critical to have a rigorous analysis. By using intervals throughout the system, we make it easier to include measurement errors in our models and to prove safety properties.

  15. Timothy J. Hickey and David K. Wittenberg,
    Notes on using CLIP,
    2004, currently a very early draft. ps, pdf

    This is a start at documenting CLIP. Eventually this will grow into a manual. CLP(I) (Constraint Logic Programming over Intervals) is a constraint logic programming (CLP) language whose domain is the real numbers (Moore's Interval Arithmetic). CLP(F) is a CLP language whose domain is analytic functions over the reals. CLP(F) is written in CLP(I). CLIP is an implementation of CLP(I) built on top of GNU Prolog. The current distribution of CLIP includes both CLP(I) and CLP(F) by default.

  16. David Karger Wittenberg,
    CLP(F) Modeling of Hybrid Systems,
    PhD thesis, Brandeis University, May 2004. pdf

    We wish to rigorously model hybrid systems. Because models of hybrid systems are often used in safety-critical applications, it is crucial to get accurate results with explicit limits on the errors in the model and calculations. This requires a rigorous approach. Any technique to model these systems which does not account for rounding error or error in the approximation to the solution of the ODEs governing the problem is not rigorous. We use CLP (Constraint Logic Programming) over interval arithmetic to provide an explicit limit on rounding errors and on the ODE solution errors.

  17. John T. Langton, Timothy J. Hickey and David K. Wittenberg,
    Leveraging Layout with Dimensional Stacking and Pixelization to Facilitate Feature Discovery and Directed Queries.
    in Pierre P. Lévy, Bénédicte Le Grand, François Poulet, Michel Soto, Laszlo Darago, Laurent Toubiana, and Jean-François Vibert, editors, Pixelization Paradigm: First Visual Information Expert Workshop, VIEW 2006, volume 4370 of Lecture Notes in Computer Science, pages 77--91, Springer Verlag, 2006. pdf

    Pixelization is the simple yet powerful technique of mapping each element of some data set to a pixel in a 2D image. There are two primary characteristics of pixels that can be leveraged to impart information: 1. their color and color-related attributes (hue, saturation, etc.), and 2. their arrangement in the image. We have found that applying a dimensional stacking layout to pixelization uniquely facilitates interactive data mining, directs user queries, and provides a type of visual principal components analysis. In this paper we describe our approach and how it is being used to analyze multidimensional, multivariate neuroscience data.
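
    A minimal sketch of the layout arithmetic as we understand it: dimensional stacking nests one 2D grid of dimensions inside another, so each element's index tuple determines a unique pixel. The dimension sizes below are invented, and the per-pixel coloring that the paper pairs with this layout is omitted.

      def stacked_coords(index, dims):
          """Map a multidimensional index to 2D pixel coordinates by
          dimensional stacking.  dims is a list of (x_size, y_size) pairs,
          outermost first; each pair subdivides the previous block."""
          x = y = 0
          for (i, j), (xs, ys) in zip(index, dims):
              x = x * xs + i
              y = y * ys + j
          return x, y

      # A 4-dimensional data set: the outer pair of dimensions (3 x 2) picks a
      # block, the inner pair (4 x 5) picks the pixel inside it, so the whole
      # image is 12 x 10 pixels.
      dims = [(3, 2), (4, 5)]
      print(stacked_coords([(0, 0), (0, 0)], dims))   # (0, 0)
      print(stacked_coords([(2, 1), (3, 4)], dims))   # (11, 9), the far corner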

  18. Timothy J. Hickey and David K. Wittenberg,
    Modeling Hysteresis in CLIP -- The Tank Flow Problem,
    in Rafi L. Muhanna and Robert L. Mullen, editors, REC 2006: Proceedings of the NSF Workshop on Reliable Engineering Computing: Modeling Errors and Uncertainty in Engineering Computations, pages 99--112, 2006. pdf, ps

    We study the ``Two Tanks Problem'', a hybrid system described by Stursberg et al. In this paper, we expand on the use of CLIP (a Constraint Logic Programming over Intervals and Functions language) to formally describe more complex systems. We add complexity in several forms. The simplest is to have a larger system. We move from a system with two tanks to one with four tanks, and we add non-linear valves to the pipes connecting the tanks. This example easily generalizes to an N-tanks problem where the tanks, connected by pipes, form an arbitrarily complex graph. The more important addition is the refinement of the model in several places. We rigorously model a valve in which the flow varies exponentially with the valve position over much of the valve's range, and then discontinuously as the valve is almost closed. We introduce hysteresis in our analysis to avoid an infinite loop of zero-time transitions, and we discuss why our techniques should not have trouble with ``Zeno'' transitions. The possibility of Zeno behaviour (Zhang et al.) can arise either for physical reasons (a value near zero, so the sign of the changes is hard to know) or for modeling reasons (the system is near the boundary between two behaviour regimes, and while both regimes describe similar behaviour near the boundary, the model might switch between the two regimes infinitely often in a finite time). An elegant feature of our model is that we use the same technique of hysteresis to prevent Zeno behaviour from either cause. This is easily done in CLIP by changing the conditions for a state change from one regime to the other to include hysteresis.
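
    The hysteresis idea is independent of CLIP. The toy Python simulation below (one tank, with invented rates and thresholds) shows the mechanism: separating the two switching thresholds forces each mode to persist for a finite time, so an infinite loop of zero-time transitions cannot occur.

      def simulate(level=0.5, dt=0.001, t_end=5.0):
          """One tank whose valve drains when 'on' and refills when 'off'.
          Switching at a single threshold would chatter with zero-time
          transitions; the 0.2-wide hysteresis band prevents that."""
          ON_LEVEL, OFF_LEVEL = 0.6, 0.4   # open the drain at 0.6, close it at 0.4
          draining = False
          switches = 0
          t = 0.0
          while t < t_end:
              level += (-1.0 if draining else +1.0) * dt   # net flow in each mode
              if draining and level <= OFF_LEVEL:
                  draining, switches = False, switches + 1
              elif not draining and level >= ON_LEVEL:
                  draining, switches = True, switches + 1
              t += dt
          return switches

      print("mode switches in 5 seconds:", simulate())   # finite (about 25)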

  19. Arik Z. Lakritz, Peter Macko, and David K. Wittenberg,
    Simple Additive LSB Steganography in Losslessly Encoded Images, 2007. pdf

    Kurak and McHugh [7] described LSB encoding, a steganographic technique to hide data in the low order bits of an image. Moskowitz, Longdon, and Chang [10] point out that most pictures have areas where the low order bits are clearly non-random, so if those bits are replaced with (apparently) random bits, the replacement is easily detected. Other attacks on LSB encoding are based on the observation that even low order bits which appear random when viewed alone correlate with the higher order bits in the region. Several groups [8, 9, 2, 4] have shown how to detect steganography using this fact. We introduce XLSB, which solves both problems without complicating the decoder, the first by avoiding the areas of an image in which the low order bits are non-random, and the second by adding small values to the data instead of replacing the low order bits. We give test results showing that several steganalysis techniques do not detect XLSB, even at very high data rates (∼4 bpp). This is several hundred times the data rate at which the steganalysis techniques detect ordinary LSB encoding.
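
    Here is a sketch of the additive idea, often called LSB matching, next to classic LSB replacement; avoiding the non-random regions of the image, the other half of XLSB, is omitted. Note that the decoder is identical for both embeddings.

      import random

      def lsb_replace(pixels, bits):
          """Classic LSB encoding: overwrite the low-order bit of each sample."""
          return [(p & ~1) | b for p, b in zip(pixels, bits)]

      def additive_lsb(pixels, bits):
          """Additive embedding: when the low bit is wrong, add or subtract 1
          at random instead of forcing the bit, so the sample keeps its
          natural correlation with the higher-order bits."""
          out = []
          for p, b in zip(pixels, bits):
              if p & 1 == b:
                  out.append(p)              # bit already correct: no change
              elif p == 0:
                  out.append(1)              # clamp at the ends of the range
              elif p == 255:
                  out.append(254)
              else:
                  out.append(p + random.choice((-1, 1)))
          return out

      pixels = [random.randrange(256) for _ in range(16)]
      bits = [random.randrange(2) for _ in range(16)]
      stego = additive_lsb(pixels, bits)
      assert [s & 1 for s in stego] == bits   # the simple decoder is unchanged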

  20. Udit Dhawan, Albert Kwon, Edin Kadric, Cătălin Hriţcu, Benjamin C. Pierce, Jonathan M. Smith, Gregory Malecha, Greg Morrisett, Thomas F. Knight, Jr., Andrew Sutherland, Tom Hawkins, Amanda Zyxnfryx, David Wittenberg, Peter Trei, Sumit Ray, Greg Sullivan, and André DeHon,
    Hardware Support for Safety Interlocks and Introspection,
    Adaptive Host and Network Security Workshop, 2012. pdf

    Hardware interlocks that enforce semantic invariants and allow fine-grained privilege separation can be built with reasonable costs given modern semiconductor technology. In the common error-free case, these mechanisms operate largely in parallel with the intended computation, monitoring the semantic intent of the computation on an operation-by-operation basis without sacrificing cycles to perform security checks. We specifically explore five mechanisms: (1) pointers with manifest bounds (fat pointers), (2) hardware types (atomic groups), (3) processor-supported authority, (4) authority-changing procedure calls (gates), and (5) programmable metadata validation and propagation (tags and dynamic tag management). These mechanisms allow the processor to continuously introspect on its operation, efficiently triggering software handlers on events that require logging, merit sophisticated inspection, or prompt adaptation. We present results from our prototype FPGA implementation of a processor that incorporates these mechanisms, quantifying the logic, memory, and latency requirements. We show that the dominant cost is the wider memory necessary to hold our metadata (the atomic groups and programmable tags), that the added logic resources make up less than 20% of the area of the processor, that the concurrent checks do not degrade processor cycle time, and that the tag cache is comparable to a small L1 data cache.
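
    Of these mechanisms, fat pointers are the simplest to model in software. The toy Python class below shows only the semantics of a bounds check on every access; in the hardware described, that check runs concurrently with the computation rather than as sequential code.

      class FatPointer:
          """A pointer that carries base and bound metadata; every access is
          checked against the bounds before it reaches memory."""

          def __init__(self, memory, base, length):
              self.memory, self.base, self.length = memory, base, length

          def _check(self, offset):
              if not 0 <= offset < self.length:
                  raise MemoryError(f"offset {offset} not in [0, {self.length})")

          def load(self, offset):
              self._check(offset)
              return self.memory[self.base + offset]

          def store(self, offset, value):
              self._check(offset)
              self.memory[self.base + offset] = value

      mem = [0] * 64
      p = FatPointer(mem, base=8, length=4)   # points to a 4-word object
      p.store(3, 42)                          # in bounds: fine
      try:
          p.load(4)                           # one past the end
      except MemoryError as e:
          print("trapped:", e)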

  21. Silviu Chiricescu, André DeHon, Delphine Demange, Suraj Iyer, Aleksey Kliger, Greg Morrisett, Benjamin C. Pierce, Howard Reubenstein, Jonathan M. Smith, Gregory T. Sullivan, Arun Thomas, Jesse Tov, Christopher M. White, and David Wittenberg,
    SAFE: A Clean-Slate Architecture for Secure Systems.
    IEEE Conference on Technologies for Homeland Security, 2013. pdf

    SAFE is a large-scale, clean-slate co-design project encompassing hardware architecture, programming languages, and operating systems. Funded by DARPA, the goal of SAFE is to create a secure computing system from the ground up. SAFE hardware provides memory safety, dynamic type checking, and native support for dynamic information flow control. The Breeze programming language leverages the security features of the underlying machine, and the “zero kernel” operating system avoids relying on any single privileged component for overall system security. The SAFE project is working towards formally verifying security properties of the runtime software. The SAFE system sets a new high-water mark for system security, allowing secure applications to be built on a solid foundation rather than on the inherently vulnerable conventional platforms available today.
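
    SAFE enforces dynamic information flow control in hardware via tags; the toy Python taint-propagation model below (a two-point label lattice, with invented names) illustrates only the semantics being enforced.

      LOW, HIGH = "LOW", "HIGH"   # two-point lattice: LOW may flow to HIGH, not back

      def join(a, b):
          """Least upper bound of two labels."""
          return HIGH if HIGH in (a, b) else LOW

      class Labeled:
          """A value carrying an information-flow label; operations on labeled
          values join the labels of their operands."""

          def __init__(self, value, label=LOW):
              self.value, self.label = value, label

          def __add__(self, other):
              return Labeled(self.value + other.value, join(self.label, other.label))

      def public_output(x):
          """A LOW sink: releasing HIGH-labeled data here is a flow violation."""
          if x.label == HIGH:
              raise PermissionError("HIGH data may not reach a LOW sink")
          print(x.value)

      salary = Labeled(90_000, HIGH)
      bonus = Labeled(5_000, LOW)
      try:
          public_output(salary + bonus)   # the sum is labeled join(HIGH, LOW) = HIGH
      except PermissionError as e:
          print("trapped:", e)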

  22. Jeffrey Smith, Basil Krikeles, David K. Wittenberg, and Mikael Taveniku,
    Applied Vulnerability Detection System.
    IEEE Conference on Technologies for Homeland Security, 2015. pdf

    In [1], we presented a Vulnerability Detection System (VDS) that can detect emergent vulnerabilities in complex Cyber Physical Systems (CPS). It used the attacker’s point of view by collecting a target system’s vulnerability information from varied sources, and populating an Attack Point (AP) database. From these APs, a Hierarchical Task Network generated the set of composite device-level attack scenarios. The VDS used Alloy [2] to reduce the cardinality of the generated space by evaluating the feasibility of each attack. This paper specializes that prior research by submitting the generated prioritized list to an automotive-specific Attack Evaluation Process (AAEP). With a combination of simulation and vehicle-instrumented real-time execution, the AAEP confirms each candidate attack. The AAEP’s output is used as feedback to refine the Alloy model. The VDS is designed to support short product release cycles. The AAEP separates domain-specific from domain-independent aspects so that the VDS can be rapidly retargeted.

  23. Joseph Fahey, Howard Reubenstein, David Wittenberg, and Gregory Sullivan,
    Benefits of Deploying Inherently Secure Nodes Within a Distributed System.
    IEEE Symposium on Software Technologies, 2015. pdf

    We present simulation results that show that placing security-related intrusion monitors on SAFE processors (as opposed to on vulnerable conventional systems) dramatically increases the overall health of a distributed system. We model a distributed system protected by a combination of network- and host-based detection and remediation infrastructure. The model is parameterized according to the accuracy of attack detection, the likelihood of attack success, and the ratio of security-related resources to general computation resources.
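
    The paper's model itself is not reproduced here. As a rough indication of the kind of parameterized simulation described, here is a toy Monte Carlo with entirely invented dynamics, keeping only the abstract's parameters: detection accuracy, attack success likelihood, and whether the monitors sit on inherently secure (SAFE) nodes.

      import random

      def final_health(n_workers=90, n_monitors=10, monitors_on_safe=True,
                       detect_accuracy=0.9, attack_success=0.3, rounds=2000):
          """Each round one random node is attacked; a compromise sticks unless
          a still-healthy monitor detects and remediates it.  Monitors hosted
          on SAFE nodes are assumed immune to compromise."""
          n = n_workers + n_monitors
          compromised = set()
          for _ in range(rounds):
              target = random.randrange(n)
              if target >= n_workers and monitors_on_safe:
                  continue                         # attacks on SAFE nodes fail
              if random.random() < attack_success:
                  compromised.add(target)
                  healthy_monitors = sum(1 for m in range(n_workers, n)
                                         if m not in compromised)
                  if healthy_monitors and random.random() < detect_accuracy:
                      compromised.discard(target)  # detected and remediated
          return 1 - len(compromised) / n

      for safe in (True, False):
          print(f"monitors on SAFE nodes = {safe}: "
                f"final health ~ {final_health(monitors_on_safe=safe):.2f}")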

  24. David K. Wittenberg, Jeffrey Smith, Robert Gray, and Gregory Eakman,
    Automotive Vulnerability Detection System.
    Embedded Security in Cars, 2016. pdf

    In [1] we presented a Vulnerability Detection System (VDS) that can detect emergent vulnerabilities in complex Cyber Physical Systems (CPSs). It used the attacker’s point of view by collecting a target system’s vulnerability information from varied sources and populating an Attack Point (AP) database. From these APs, a Hierarchical Task Network (HTN) generated the set of composite device-level attack scenarios. The VDS used Alloy [2], a Satisfiability (SAT) planner, to reduce the cardinality of the generated space by evaluating the feasibility of each attack. In [3], we specialized the VDS for the automobile domain. This paper further 1) specializes our prior research by submitting the generated prioritized list to an automotive-specific Attack Evaluation Process (AAEP), and 2) enhances our prior research with a method to discover and test vulnerabilities by reverse engineering the actual binary code. With a combination of simulation and vehicle-instrumented real-time execution, the AAEP confirms each candidate attack. The AAEP’s output is used as feedback to refine the SAT constraint model. A novel part of the AAEP is our Automated Reverse Engineering (ARE) system, which greatly reduces the search space for software bugs. The VDS is designed to support short product release cycles.

  25. David K. Wittenberg, Edin Kadric, André DeHon, Jonathan Edwards, Jeffrey Smith, and Silviu Chiricescu,
    PERFECT Case Studies Demonstrating Order of Magnitude Reduction in Power Consumption.
    High Performance Extreme Computing, 2016. paper - pdf
    slides - pdf
    slides - powerpoint
    We propose three methods for reducing power consumption in high-performance FPGAs (field programmable gate arrays). We show that by using continuous hierarchy memory, lightweight checks, and lower chip voltage for near-threshold voltage computation, we can both reduce power consumption and increase reliability without a decrease in throughput.

    We have implemented these techniques in two different, realistic wide-area motion imagery algorithms on FPGAs. We demonstrated greatly improved performance/efficiency compared to two flight-tested platforms, getting up to a 250X improvement in power efficiency (measured in giga operations per second per watt). This paper summarizes these two case studies.