Harvard Computer Science Technical Reports for 1988

Amos Israeli and Ming Li. 1988. ``Bounded Time-Stamps.''
Time-stamps are labels which a system adds to its data items. These labels enable the system to keep track of the temporal precedence relation among its data elements. Traditionally, time-stamps are used as unbounded numbers, and inevitable overflows cause a loss of this precedence relation. In this paper we develop a theory of {\it bounded time-stamps}. Time-stamp systems are defined and the complexity of their implementation is fully analyzed. This theory gives a general tool for converting time-stamp-based protocols to bounded protocols. The power of this theory is demonstrated by a novel, conceptually simple protocol for a multi-writer atomic register, as well as by proving, for the first time, a non-trivial lower bound for such a register.
Ming Li and Paul M.B. Vitanyi. 1988. ``Two Decades of Applied Kolmogorov Complexity.''
This exposition is a survey of elegant and useful applications of Kolmogorov complexity. We distinguish three areas: I) Application of the fact that some strings are compressible. This includes a strong version of Gödel's incompleteness theorem. II) Lower bound arguments which rest on application of the fact that certain strings cannot be compressed at all. Applications range from Turing machines to electronic chips. III) Other issues. For instance, the foundations of Probability Theory, a priori probability, and resource-bounded Kolmogorov complexity. Applications range from NP-completeness to inductive inference in Artificial Intelligence.
Joe Marks, Robert J. Walsh, and Mark Friedell. 1988. ``Hybrid Beam-Ray Tracing.''
Ray tracing is the most accurate, but unfortunately also the most expensive of current rendering techniques. Beam tracing, suggested by Heckbert and Hanrahan, is a generalization of ray tracing that exploits area coherence to trace multiple rays in parallel. We present a new approach to beam tracing derived from the area-subdivision technique of Warnock's hidden-surface algorithm. We use this approach to beam tracing as the basis of a hybrid beam-ray tracing algorithm. The algorithm uses beam tracing to render large coherent regions of the image, and ray tracing to render complex regions. A heuristic decision procedure chooses between beam tracing and ray tracing a given area on the basis of estimated rendering costs. Experimental results show that our hybrid algorithm is more efficient than either ray tracing or beam tracing used alone.
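A minimal sketch of such a decision procedure (the cost model below is hypothetical, not the paper's heuristic): a region is rendered with whichever technique is estimated to be cheaper, where beam tracing is assumed to pay a per-surface setup cost for the whole region and ray tracing a per-pixel cost.

```python
# Hypothetical cost model for choosing between beam and ray tracing.
# Beam tracing: one setup cost per surface overlapping the region.
# Ray tracing: one cost per pixel in the region.
def choose_renderer(num_surfaces, num_pixels,
                    beam_cost_per_surface=8.0, ray_cost_per_pixel=1.0):
    """Return the cheaper technique for a screen region under the
    assumed cost model; the constants are illustrative, not measured."""
    beam_cost = beam_cost_per_surface * num_surfaces
    ray_cost = ray_cost_per_pixel * num_pixels
    return "beam" if beam_cost <= ray_cost else "ray"
```

A large coherent region (few surfaces, many pixels) comes out "beam", while a small region crowded with surfaces comes out "ray", matching the hybrid strategy described above.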
Bin Zhang and Meichun Hsu. 1988. ``Unsafe Operations in B-trees.''
A simple mathematical model for analyzing the dynamics of a B-tree node is presented. From the solution of the model, it is shown that the simple technique of allowing a B-tree node to be slightly less than half full can significantly reduce the rate of split, merge, and borrow operations. We call split, merge, borrow, and balance operations {\it unsafe} operations in this paper. In a multi-user environment, a lower unsafe operation rate implies less blocking and higher throughput, even when tailored concurrency control algorithms (e.g., that proposed by [Lehman & Yao]) are used. A lower unsafe operation rate also means a longer lifetime for an optimally initialized B-tree (e.g., a compact B-tree). It is in general useful to have an analytical model which can predict the rate of unsafe operations in a dynamic data structure, not only for comparing the behavior of variations of B-trees, but also for characterizing workload for performance evaluation of different concurrency control algorithms for such data structures. The model presented in this paper represents a starting point in this direction.
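The relaxed-occupancy idea can be sketched as follows (the threshold rule is an illustrative assumption, not the paper's model): a node triggers an unsafe operation only when it overflows, or when it falls more than `slack` keys below the classic half-full minimum.

```python
# Illustrative sketch of when a B-tree node requires an "unsafe"
# operation; the slack parameter is hypothetical.
def needs_unsafe_op(num_keys, capacity, slack=0):
    """Return which unsafe operation, if any, a node requires.

    slack = 0 is the classic rule (merge/borrow below half full);
    slack > 0 tolerates slightly underfull nodes, reducing the rate
    of unsafe operations, as the model above predicts."""
    half = (capacity + 1) // 2
    if num_keys > capacity:
        return "split"
    if num_keys < half - slack:
        return "merge_or_borrow"
    return None
```

With `slack=1`, a node of capacity 8 holding 3 keys is left alone, whereas the classic rule would immediately merge or borrow.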
Meichun Hsu and Bin Zhang. 1988. ``The Mean Value Approach to Performance Evaluation of Cautious Waiting.''
We propose a deadlock-free locking-based concurrency control algorithm, called {\it cautious waiting}, which allows for a limited form of waiting, and present an analytical solution to its performance evaluation based on the mean-value approach. The proposed algorithm is simple to implement. From the modeling point of view, we are able to track both the restart rate and the blocking rate properly, and we show that to solve the model we only need to find the root of a polynomial. From the performance point of view, the analytical tools developed enable us to see that the cautious waiting algorithm manages to achieve a {\it delicate balance} between restart and blocking, and is therefore able to perform better (i.e., has higher throughput) than both the no waiting and the general waiting algorithms. This result is encouraging for other deadlock avoidance-oriented locking algorithms.
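One common form of such a limited-waiting rule (stated here as an assumption about the flavor of the algorithm, not a quotation of the paper's protocol) is: on a lock conflict, the requester may wait only if the lock holder is not itself blocked; otherwise the requester is restarted. This bounds every waits-for chain at length one, so deadlock cannot arise.

```python
# Illustrative sketch of a cautious-waiting style conflict rule;
# the exact protocol details are assumptions.
def on_lock_conflict(requester, holder, blocked):
    """Decide the requester's fate on a lock conflict.

    blocked: set of transaction ids currently waiting for a lock.
    A requester may wait only for an unblocked holder, so no
    waits-for chain ever exceeds length one (hence no deadlock)."""
    if holder in blocked:
        return "restart"   # waiting would form a chain; restart instead
    blocked.add(requester)
    return "wait"
```

For example, T2 may wait for an active T1, but a later T3 conflicting with the now-blocked T2 is restarted rather than queued behind it.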
Gary Wayne Sabot. 1988. ``An Architecture-Independent Model for Parallel Programming.''
This dissertation describes a fundamentally new way of looking at parallel programming called the paralation model. The paralation model consists of a new data structure and a small, irreducible set of operators. The model can be combined with any base language to produce a concrete parallel language. One important goal of the model is ease of use for general problem solving. The model must provide tools that address the problems of the programmer. Equally important, languages based upon the model must be easy to compile, in a transparent and efficient manner, for a broad range of target computer architectures (for example, Multiple Instruction Multiple Data (MIMD) or Single Instruction Multiple Data (SIMD) processors; bus-based, butterfly, or grid interconnect; and so on). Several compilers based on this model exist, including one that produces code for the 65,536-processor Connection Machine. The dissertation includes a short (two-page) operational semantics for a paralation language based on Common Lisp. By executing this code, the interested reader can experiment with the paralation constructs. The dissertation (as well as a disk containing a complete implementation of Paralation Lisp) is available from MIT Press as "The Paralation Model: Architecture-Independent Parallel Programming".
Meichun Hsu and Va-On Tam. 1988. ``Managing Databases in Distributed Virtual Memory.''

Meichun Hsu and Bin Zhang. 1988. ``Modeling Performance Impact of Hot Spots.''
An important factor that may affect the performance of the concurrency control algorithm of a database system is the skewness of the distribution of accesses to data granules (i.e., a non-uniform access pattern). In this paper, we examine the impact of hot spots analytically by employing the mean-value approach to performance modeling of concurrency control algorithms. The impact of non-uniform accesses is approached based on a new principle of {\it data flow balance}. Using data flow balance, we generalize the b-c access pattern (i.e., b% of the accesses are directed to c% of the data granules) to arbitrary distributions, and solve the non-uniform access model for two classes of two-phase locking algorithms. We also show that the database reduction factor employed previously in the literature to handle non-uniform access is in fact an upper bound.
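The b-c access pattern itself is easy to realize with a standard two-level sampler (a generic construction for illustration, not the paper's model):

```python
import random

# Illustrative sampler for the b-c access pattern: a fraction b of
# accesses goes to the "hot" fraction c of the granules.
def bc_access(num_granules, b=0.8, c=0.2, rng=random):
    """Draw one granule id under a b-c skewed access pattern.
    Granules 0..hot-1 are the hot set; the rest are cold."""
    hot = int(num_granules * c)
    if rng.random() < b:                       # hot access with prob. b
        return rng.randrange(hot)
    return hot + rng.randrange(num_granules - hot)  # cold access
```

With b = c the pattern degenerates to uniform access; with b = 0.8 and c = 0.2 it is the classic "80% of accesses to 20% of the data" workload.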
Bin Zhang. 1988. ``Analytical Preprocessing for Likelihood Ratio Methods.''
An analytical preprocessing (APP) method for the likelihood ratio method, an algorithm for finding the derivatives of a performance function of Discrete Event Dynamical Systems (DEDS), is developed. The convergence of the integral used in the likelihood ratio method and the legality of interchanging $\frac{\partial}{\partial\mu}$ with $\int$ are proved for a large class of sample performance functions.
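For context, the interchange in question is the one underlying the standard likelihood-ratio (score-function) derivative estimate; here $L$ is a generic sample performance function and $f(\cdot;\mu)$ a parametrized density (a textbook statement of the identity, not the paper's exact class of functions):

```latex
\frac{\partial}{\partial\mu}\,\mathrm{E}_\mu[L(X)]
  = \frac{\partial}{\partial\mu}\int L(x)\,f(x;\mu)\,dx
  = \int L(x)\,\frac{\partial f(x;\mu)}{\partial\mu}\,dx
  = \mathrm{E}_\mu\!\left[L(X)\,\frac{\partial\ln f(X;\mu)}{\partial\mu}\right]
```

The second equality is exactly the interchange of $\frac{\partial}{\partial\mu}$ with $\int$ whose legality the paper establishes.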
Steven Salzberg. 1988. ``Exemplar-based Learning: Theory and Implementation.''
Exemplar-based learning is a theory of inductive learning in which learning is accomplished by storing objects in Euclidean $n$-space, $E^n$, as hyper-rectangles. In contrast to the usual inductive learning paradigm, which learns by replacing symbolic formulae by more general formulae, the generalization process for exemplar-based learning modifies hyper-rectangles by growing and reshaping them in a well-defined fashion. Some advantages and disadvantages of the theory are described, and the theory is compared to other inductive learning theories. An implementation has been tested on several different domains, four of which are presented here: predicting the recurrence of breast cancer, classifying iris flowers, predicting stock prices, and predicting survival times for heart attack patients. The robust learning behavior and easily understandable representations produced by the implementation support the claim that exemplar-based learning offers advantages over other learning models for some classes of problems.
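A minimal sketch of the nearest-hyperrectangle idea (an illustrative reconstruction, not Salzberg's implementation): each exemplar is an axis-aligned box in $E^n$; a query point is labeled by its nearest box, and a box is generalized by growing it to cover a new example of its class.

```python
def distance(point, rect):
    """Euclidean distance from a point to an axis-aligned
    hyperrectangle rect = (lo, hi); zero if the point is inside."""
    lo, hi = rect
    d = 0.0
    for p, l, h in zip(point, lo, hi):
        if p < l:
            d += (l - p) ** 2
        elif p > h:
            d += (p - h) ** 2
    return d ** 0.5

def classify(point, rects):
    """rects: list of ((lo, hi), label). Label of the nearest rect."""
    return min(rects, key=lambda r: distance(point, r[0]))[1]

def grow(rect, point):
    """Generalize a rectangle to cover a new example of its class."""
    lo, hi = rect
    return (tuple(min(l, p) for l, p in zip(lo, point)),
            tuple(max(h, p) for h, p in zip(hi, point)))
```

Growing and (in the full theory) reshaping boxes replaces the symbolic generalization step of conventional inductive learners.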
Azer A. Bestavros. 1988. ``An Algorithm for Self Diagnosis in Distributed Systems.''
In this paper, based on a powerful diagnostic model, we present a distributed diagnosis algorithm which makes a distributed system capable of self-repairing (replacing) faulty units while operating in a gracefully degraded fault-tolerant manner. The algorithm allows the on-line reentry of repaired (replaced) units, and deals effectively with synchronization and time-stamping problems. We define an active set to be a set of fault-free processing elements that agree on a unified diagnosis. The algorithm we propose enables each healthy processor to reliably identify the largest possible active set to which it belongs. If each processor refrains from further dealings with units not in its active set, then logical reconfiguration can be achieved. Concurrent with this on-line fault isolation, analyzers in the system carry out detailed diagnosis to locate the faulty units, and upon a proper report, the associated controllers can perform the required repair/replacement. Later, recovering units will be allowed to reenter the system. The algorithm is shown to be robust in that it guarantees the survival of any possible active set. In particular, if there are no more than {\it t} faults in the system, the algorithm guarantees that all fault-free processors will eventually identify the fault pattern, provided that the system is {\it t self-diagnosable without repair}. Furthermore, if the system is {\it t self-diagnosable with repair}, then the algorithm guarantees that at least one controller will be able to repair/replace (at least) one faulty unit.
Milena Mihail. 1988. ``On Coupling and the Approximation of the Permanent.''
We discuss in detail the limitations of the coupling technique for a Markov chain on a population with non-trivial combinatorial structure, namely the perfect and near-perfect matchings of a dense bipartite graph.
Donald Beaver. 1988. ``Distributed Non-Cryptographic Oblivious Transfer with Constant Rounds Secret Function Evaluation.''
We develop tools for distributively and secretly computing the {\it parity} of a hidden value in a {\it constant number of rounds} without revealing the secret argument or result, improving the previous $O(\log n)$ round requirement, and we apply them to a {\it generalization of Oblivious Transfer} to the distributed environment. Oblivious Transfer, in which Alice reveals a bit to Bob with 50-50 probability, but without knowing whether or not Bob received it, has found diverse applications. In the case where Alice and Bob are alone, and they cannot trust each other, all known solutions require extensive cryptography. If, however, there exist other trustworthy parties around, we show that oblivious transfer can be performed without any cryptography, even if nobody knows who the trustworthy parties are. It suffices that at most a third or a half of the participants are dishonest. Using previously developed methods for distributed secret computation, we give a solution requiring $O(\log n)$ rounds of interaction. We then give novel techniques to reduce the number of rounds to a constant. These new techniques include computing a random 0-1 secret, taking the multiplicative inverse of a secret, and normalizing a secret (without revealing the result) in a constant expected number of rounds. Such tools allow arbitrary functions over a polynomially sized domain of arguments to be evaluated quickly and efficiently. The methods developed for this paper provide a basis for solutions to other problems in distributed secret computation.
Michael Kearns and Leslie G. Valiant. 1988. ``Learning Boolean Formulae or Finite Automata is as Hard as Factoring.''
We prove that the problem of inferring from examples Boolean formulae or deterministic finite automata is computationally as difficult as deciding quadratic residuosity, inverting the RSA encryption function, and factoring Blum integers (composite numbers $p \cdot q$ where $p$ and $q$ are primes both congruent to 3 {\it mod} 4). These results are for the distribution-free model of learning. They hold even when the inference task is that of deriving a probabilistic polynomial time classification algorithm that predicts the correct value of a random input with probability at least $\frac{1}{2} + \frac{1}{p(s)}$, where $s$ is the size of the Boolean formula or deterministic finite automaton, and $p$ is any polynomial.
Daniel David Krizanc. 1988. ``Merging and Routing on Parallel Models of Computation.''
The term ``parallel computation'' encompasses a large and diverse set of problems and theory. In this thesis we use it to refer to any computation which consists of a large collection of tightly coupled synchronized processors working together to solve a terminating computational problem. Models of such computation capture to varying degrees properties of existing and proposed parallel machines. Our main concerns here are with how the models deal with the communication between processors and what effect the introduction of randomness has on the models. The main results of the thesis are: (a) a lower bound on the average time required by a parallel comparison tree (PCT) to merge two lists; (b) a tight tradeoff between the running time, the probability of failure, and the number of independent random bits used by an oblivious routing strategy for a fixed connection network; (c) a similar tradeoff for the problem of finding the median on the PCT model; and (d) a tradeoff between the amount of storage required by the nodes of a network and the deterministic time complexity of a class of oblivious routing strategies for the network.
Azer Bestavros and William McKeeman. 1988. ``Parallel Bin Packing Using First-fit and K-delayed Best-fit Heuristics.''
In this paper, we present and contrast simulation results for the {\it Bin Packing} problem. We used the Connection Machine to simulate the asymptotic behavior of a variety of packing heuristics. Several parallel algorithms are considered, presented, and contrasted in the paper. The seemingly serial nature of the bin packing simulation has prevented previous experiments from going beyond sizes of several thousand bins. We show that by adopting a fairly simple data-parallel algorithm, a speedup by a factor of $N$ over a straightforward serial implementation is possible. Sizes of up to hundreds of thousands of bins have been simulated for different parameters and heuristics.
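For reference, the first-fit heuristic itself can be sketched in a few lines (a plain serial version, not the data-parallel Connection Machine algorithm described above): each item is placed in the first bin with enough remaining capacity, and a new bin is opened if none fits.

```python
# Serial first-fit bin packing, for illustration of the heuristic.
def first_fit(items, bin_capacity=1.0):
    """Pack items (sizes in (0, bin_capacity]) and return the
    contents of each bin in the order the bins were opened."""
    free = []       # remaining capacity per bin
    packing = []    # items assigned to each bin
    for item in items:
        for i, f in enumerate(free):
            if item <= f:           # first bin with room wins
                free[i] -= item
                packing[i].append(item)
                break
        else:                       # no bin fits: open a new one
            free.append(bin_capacity - item)
            packing.append([item])
    return packing
```

For example, `first_fit([0.5, 0.7, 0.5, 0.3])` packs the items into two bins: the two 0.5 items share the first bin, and 0.3 joins 0.7 in the second.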
Robert J. Walsh. 1988. ``Ray Tracing for Massively-Parallel Machines.''
Proper data-type modeling for ray tracing on massively-parallel machines is presented as a basis for parallel graphics algorithm construction. A spatial enumeration algorithm is developed, discussed, modeled, and evaluated. The spatial enumeration evaluation is used to motivate an octree algorithm appropriate for parallel machines. These algorithms are suitable for any graphics problem which can be solved using rays, including radiosity. While the focus is computer graphics, general-purpose, massively-parallel machine architecture is discussed sufficiently to support the analysis and to generate some ideas for machine design.
Robert J. Walsh. 1988. ``A New Scan-Line Algorithm for Massively-Parallel Machines.''
This paper describes how to rapidly generate a synthetic image using a new rendering algorithm for general-purpose, massively-parallel architectures. The presentation shows how algorithm design decisions were made and how to customize the algorithm to a particular architecture. Alternative algorithms are analyzed qualitatively and quantitatively to demonstrate the superiority of the new algorithm.
Donald Beaver. 1988. ``Secure Multiparty Protocols Tolerating Half Faulty Processors.''
We show how any function of $n$ inputs can be evaluated by a complete network of $n$ processors, revealing no information other than the result of the function, and tolerating up to $t$ maliciously faulty parties for $2t < n$. We demonstrate a resilient method to multiply secret values without using cryptography. The crux of our method is a new, non-cryptographic zero-knowledge technique by which a single party can secretly share values $a_1, \ldots, a_m$ along with another secret $B = P(a_1, \ldots, a_m)$, where $P$ is an arbitrary function; and by which the party can prove to all other parties that $B = P(a_1, \ldots, a_m)$, without revealing $B$ or any other information. Using this technique we give a protocol for multiparty private computation that improves the bound established by previous results, which required $3t < n$. Our protocols allow an exponentially small chance of error, but are provably optimal in their resilience against Byzantine faults.
Abdelsalam Abdelhamid Heddaya. 1988. ``Managing Event-based Replication for Abstract Data Types in Distributed Systems.''
Data replication enhances the availability of data in distributed systems. This thesis deals with the management of a particular representation of replicated data objects that belong to Abstract Data Types (ADT). Traditionally, replicated data objects have been stored in terms of their {\em states}, or {\em values}. In this thesis, I argue for the viability of state transition {\em histories}, or {\em logs}, as a more suitable storage representation for abstract data types in distributed computing environments. We present two main contributions: a new protocol for reducing message and storage requirements of histories, and a novel reconfiguration and recovery method. In the first protocol, we introduce the notion of {\em two phase gossip} as the primary mechanism for managing distributed replicated event histories. We focus our second protocol for reconfiguration and recovery on enhancing the availability of distributed objects in the face of sequences of failures. Additionally, our reconfiguration protocol supports system administration functions related to the storage of distributed objects. In combination, the two protocols that we propose demonstrate the viability and desirability of the distributed representation of an ADT object as a history of the state transitions that the data object undergoes, rather than as the value or the sequence of values that it assumes.