State-of-the-art provenance systems accumulate data over time, creating deep lineage trees. When queried for the lineage of an object, these systems can return excessive results due to the longevity and depth of their provenance. Such a query is underspecified: it does not constrain its result to a finite span of history. Unfortunately, specifying queries correctly often requires in-depth knowledge of the data set. We address the problem of underspecified lineage queries on provenance with techniques inspired by Web search. We present two metrics, SubRank and ProvRank, that measure the frequency of a particular result across the space of all possible lineage queries. We then use these metrics to define a subset of the lineage with which to respond to a query. These metric-defined result sets closely approximate a user’s conceptual view of relevant history. We evaluate our techniques on diverse workflows ranging from Wikipedia revision data to fMRI processing.
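The abstract does not define SubRank or ProvRank; as an illustration of the general idea only — scoring each provenance node by how often it appears across all possible lineage queries — here is a toy ranking over a small provenance DAG (all function and variable names are hypothetical, not from the paper):

```python
from collections import defaultdict

def lineage(node, parents):
    """All ancestors of `node` in the provenance DAG (its full lineage)."""
    seen, stack = set(), [node]
    while stack:
        n = stack.pop()
        for p in parents.get(n, ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def toy_rank(parents, nodes):
    """Toy frequency metric: fraction of lineage queries whose result
    contains each node, over the space of all single-object queries."""
    counts = defaultdict(int)
    for query_node in nodes:
        for ancestor in lineage(query_node, parents):
            counts[ancestor] += 1
    return {n: counts[n] / len(nodes) for n in nodes}

# Tiny DAG: a -> b -> c, and a -> d  (child maps to its parents)
parents = {"b": ["a"], "c": ["b"], "d": ["a"]}
scores = toy_rank(parents, ["a", "b", "c", "d"])
```

Under this toy metric, `a` appears in the lineage of `b`, `c`, and `d`, so it scores highest; such frequently-recurring ancestors are the kind of result a metric-defined answer set might prioritize or truncate.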
In systems that handle confidential information, the security policy to enforce on information frequently changes: new users join the system, old users leave, and sensitivity of data changes over time. It is challenging, yet important, to specify what it means for such systems to be secure, and to gain assurance that a system is secure.
We present a language-based model for specifying, reasoning about, and enforcing information security in systems that dynamically change the security policy. We specify security for such systems as a simple and intuitive extensional knowledge-based semantic condition: an attacker can only learn information in accordance with the current security policy.
Importantly, the semantic condition is parameterized by the ability of the attacker. Learning is about change in knowledge, and an observation that allows one attacker to learn confidential information may provide a different attacker with no new information. A program that is secure against an attacker with perfect recall may not be secure against a more realistic, weaker, attacker.
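The paper's formalism is not reproduced here, but the perfect-recall point can be made concrete with a toy knowledge calculation (all names and the example program are illustrative assumptions, not the paper's model). The program below emits the secret XORed with a nonce, then the nonce; an attacker who remembers the whole trace learns the secret, while one who remembers only the last observation learns nothing:

```python
def knowledge(observed_trace, memory, secrets, run):
    """Secrets consistent with what the attacker remembers of a trace.
    `memory(trace)` models the attacker's ability: what it retains."""
    remembered = memory(observed_trace)
    return {s for s in secrets if memory(run(s)) == remembered}

# Toy program: emit (h XOR r), then r.  Nonce fixed to 1 for illustration.
def run(h, r=1):
    return (h ^ r, r)

secrets = {0, 1, 2, 3}
trace = run(2)                          # actual secret is 2

perfect_recall = lambda t: t            # remembers every observation
memoryless     = lambda t: t[-1:]       # remembers only the last one

k_strong = knowledge(trace, perfect_recall, secrets, run)
k_weak   = knowledge(trace, memoryless, secrets, run)
```

The perfect-recall attacker narrows the secret to a single value; the memoryless attacker rules nothing out, illustrating why security must be parameterized by attacker ability.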
We introduce a compositional model of attackers that simplifies enforcement of security, and demonstrate that standard information-flow control mechanisms, such as security-type systems and information-flow monitors, can be easily adapted to enforce security for a broad and useful class of attackers.
A great deal of research on sanitizer placement, sanitizer correctness, checking path validity, and policy inference has been done in the last five to ten years, involving type systems, static analysis, and runtime monitoring and enforcement. In nearly all of this work, however, the burden of sanitizer placement has fallen on the developer. Sanitizer placement in large-scale applications is difficult, and developers are likely to make errors and thus create security vulnerabilities.
This paper advocates a radically different approach: we aim to fully automate the placement of sanitizers by analyzing the flow of tainted data in the program. We argue that developers are better off leaving out sanitizers entirely instead of trying to place them.
This paper proposes a fully automatic technique for sanitizer placement. Placement is static whenever possible, switching to run-time enforcement only when necessary. Run-time taint tracking can be used to track the source of a value and thus apply the appropriate sanitization; however, because taint tracking carries run-time overhead, our technique avoids it wherever possible.
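The paper's placement algorithm is not shown here; as a minimal sketch of the run-time half of the idea — tracking a value's source so a sink can apply the right sanitizer automatically, with no developer-placed call — consider the following (the `Tainted` wrapper and `emit_html` sink are hypothetical names, not the paper's system):

```python
import html

class Tainted(str):
    """A string tagged with its source, so a sink can choose a sanitizer."""
    def __new__(cls, value, source):
        obj = super().__new__(cls, value)
        obj.source = source
        return obj

def emit_html(value):
    """Hypothetical HTML sink: sanitizes automatically based on taint."""
    if isinstance(value, Tainted):
        return html.escape(value)   # user-controlled: escape for HTML context
    return str(value)               # trusted value: pass through unchanged

user_input = Tainted("<script>alert(1)</script>", source="http_param")
out = emit_html(user_input)
```

A static analysis can eliminate the wrapper entirely on paths where the taint status is known at compile time, which is why the technique falls back to run-time tracking only when necessary.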
We demonstrate that a practical concurrent language can be extended in a natural way with information security mechanisms that provably enforce strong information security guarantees. We extend the X10 concurrent programming language with coarse-grained information-flow control. Central to X10 concurrency abstractions is the notion of a place: a container for data and computation. We associate a security level with each place, and restrict each place to store only data appropriate for that security level. When places interact only with other places at the same security level, then our security mechanisms impose no restrictions. When places of differing security levels interact, our information security analysis prevents potentially dangerous information flows, including information flow through covert scheduling channels. The X10 concurrency mechanisms simplify reasoning about information flow in concurrent programs.
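The place discipline described above can be sketched in a few lines (this is an illustrative model, not X10 code: X10 places carry no security levels natively, and all names here are assumptions). Each place is tagged with a level, and data may move between places only up the security lattice:

```python
LEVELS = {"public": 0, "secret": 1}

class Place:
    """A container for data and computation, tagged with a security level."""
    def __init__(self, name, level):
        self.name, self.level = name, level
        self.store = {}

def copy_to(src, key, dst):
    """Permit a flow only if it goes up the lattice (src level <= dst level)."""
    if LEVELS[src.level] > LEVELS[dst.level]:
        raise PermissionError(f"illegal flow: {src.name} -> {dst.name}")
    dst.store[key] = src.store[key]

pub = Place("p0", "public")
sec = Place("p1", "secret")
pub.store["x"] = 42
copy_to(pub, "x", sec)           # public -> secret: allowed
try:
    copy_to(sec, "x", pub)       # secret -> public: rejected
    leaked = True
except PermissionError:
    leaked = False
```

Because interactions between same-level places are unconstrained, the check costs nothing in the common case; only cross-level interactions invoke the information-flow analysis.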
We present a static analysis that enforces a noninterference-based extensional information security condition in a calculus that captures the key aspects of X10’s place abstraction and async-finish parallelism. We extend this security analysis to support many of X10’s language features, and have implemented a prototype compiler for the resulting language.
In this report, we describe a fast deconvolution approach for color images that combines a sparse regularization cost on the magnitudes of gradients with constraints on their direction in color space. We form these color constraints in a way that retains the computationally efficient optimization strategy introduced in recent deconvolution methods based on half-quadratic splitting. The proposed algorithm can handle a different blur kernel in each color channel, and is used for per-layer deconvolution in our paper, “Depth and Deblurring from a Spectrally-varying Depth-of-Field”. A MATLAB implementation of this method is available at http://vision.seas.harvard.edu/ccap, and takes roughly 20 seconds to deconvolve a three-channel 1544×1028 color image on a Linux-based Intel Core i3 2.1 GHz machine.
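The half-quadratic-splitting strategy the method builds on can be illustrated on a 1-D toy problem (this sketch is not the paper's algorithm — it omits the color-direction constraints and per-channel kernels entirely, and all parameter choices are assumptions). The solver alternates a closed-form soft-threshold update on an auxiliary gradient variable z with a closed-form Fourier-domain update on the signal x, increasing the splitting penalty each round:

```python
import numpy as np

def hqs_deconv(y, k, lam=0.01, beta0=1.0, iters=8):
    """Toy 1-D half-quadratic-splitting deconvolution.
    Approximately solves min_x ||k*x - y||^2 + lam*||grad x||_1
    by alternating a shrinkage step on z ~ grad x and a
    closed-form FFT update on x (circular boundary conditions)."""
    n = len(y)
    K = np.fft.fft(k, n)                 # blur kernel, zero-padded to n
    g = np.zeros(n)
    g[0], g[-1] = 1.0, -1.0              # circular forward-difference filter
    G = np.fft.fft(g)
    Y = np.fft.fft(y)
    x, beta = y.copy(), beta0
    for _ in range(iters):
        grad = np.real(np.fft.ifft(G * np.fft.fft(x)))
        # z-update: soft-threshold (shrinkage) of the current gradient
        z = np.sign(grad) * np.maximum(np.abs(grad) - lam / beta, 0.0)
        # x-update: quadratic problem solved exactly in the Fourier domain
        X = (np.conj(K) * Y + beta * np.conj(G) * np.fft.fft(z)) \
            / (np.abs(K) ** 2 + beta * np.abs(G) ** 2)
        x = np.real(np.fft.ifft(X))
        beta *= 2.0                      # continuation on the penalty weight
    return x
```

Because both subproblems are closed-form, each iteration costs only a few FFTs, which is the efficiency the report's color-constrained formulation is designed to preserve.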