Public-use earth science datasets are a valuable resource with the unfortunate feature that their provenance is easily disconnected from their content. The “fair-use policies” typically associated with these datasets require users to attribute providers appropriately, but sound and complete attribution is difficult if provenance information is lost. To address this, we introduce a technique that directly associates provenance information with sensor datasets. Our technique is similar to traditional watermarking but is intended for application to unstructured time-series datasets. Our approach is potentially imperceptible given sufficient margins of error in the data, and is robust to a number of benign but likely transformations, including truncation, rounding, bit-flipping, sampling, and reordering. We provide algorithms for both one-bit and blind mark checking, and show how our system can be adapted to various data representation types. Our algorithms are probabilistic in nature and are characterized by both combinatorial and empirical analyses. Mark embedding can be applied at any point in the data lifecycle, allowing our scheme to be adapted to social or scientific concerns.
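As a rough illustration of this class of scheme (a minimal sketch, not the authors' actual algorithm), the code below embeds a key-dependent bit into the least-significant quantized digit of each sample. Because the embedded bit is derived from the sample's own high-order digits rather than its position in the series, the one-bit check survives reordering and subsampling; the perturbation never exceeds one quantization step, so it is imperceptible whenever that step lies within the dataset's margin of error. All function names and parameters here are illustrative assumptions.

```python
import hashlib

def _keyed_bit(key: str, anchor: int) -> int:
    # Pseudorandom bit derived from the secret key and the sample's
    # stable high-order digits, so no positional alignment is needed.
    digest = hashlib.sha256(f"{key}:{anchor}".encode()).digest()
    return digest[0] & 1

def embed(samples, key, q=0.01):
    """Force the least-significant quantized digit of each sample to a
    keyed pseudorandom bit; each value moves by at most 1.5 * q."""
    marked = []
    for x in samples:
        n = round(x / q)
        anchor = n // 2  # high-order part, unaffected by the mark
        marked.append((2 * anchor + _keyed_bit(key, anchor)) * q)
    return marked

def detect(samples, key, q=0.01):
    """One-bit check: fraction of samples whose low digit matches the
    keyed bit. Marked data scores ~1.0, unmarked data ~0.5 (chance),
    and the score is unchanged by reordering or subsampling."""
    hits = 0
    for x in samples:
        n = round(x / q)
        hits += int(n % 2 == _keyed_bit(key, n // 2))
    return hits / len(samples)
```

A real scheme would additionally tune the quantization step per field and characterize false-positive rates combinatorially, as the abstract describes; this toy only shows the keyed, order-independent embedding and the probabilistic detection threshold.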
Object illumination and color are critical characteristics of a scene, and being able to edit them allows artists to achieve powerful effects. Intrinsic image decomposition is the ideal component for this kind of task. By separating the illumination from the scene reflectance, it enables key operations such as recoloring and relighting. Significant progress has been made recently on decomposing static images. However, these algorithms rely on sophisticated optimization schemes that are computationally expensive and orders of magnitude too slow to be applied to video sequences, so much so that even an optimized implementation would remain impractical. In this paper, we introduce a user-guided algorithm that runs fast enough to be used in an interactive setting. Our strategy is to rely on an efficient sparse formulation: we exploit the same kind of information as successful static methods, but use it in ways that have only a minor impact on running time. The core of our approach is a gradient-domain \ell_2-\ell_p energy that models a sparse prior on reflectance gradients and a smooth prior on illumination. We show that the resulting set of nonlinear equations can be solved very efficiently using look-up tables. Users can then refine the decomposition with scribbles. These scribbles introduce local constraints into our optimization at only a minimal overhead. Further, we propagate these constraints to other, similar image regions, effectively enabling users to affect large regions with minimal effort. We also leverage multi-threading to precompute solutions a few frames ahead of the current one at minimal cost. Coupled with the ability of our solver to use an initial guess to speed up convergence, this effectively shortens the computation time and offers fast feedback to users. We demonstrate our approach on real sequences and show that we can obtain satisfying results with a reasonable amount of user interaction.
We illustrate the benefits of our decomposition on video recoloring and shadow compositing.
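The look-up-table idea can be sketched as follows. In a gradient-domain \ell_2-\ell_p energy, each reflectance gradient g yields a one-dimensional nonlinear subproblem of the form min_t λ|t|^p + (t − g)^2, whose minimizer depends only on |g| and can therefore be tabulated once and reused for every pixel. The sketch below is a generic shrinkage construction under assumed parameter values (lam, p, grid sizes are all illustrative), not the paper's exact formulation.

```python
import numpy as np

def build_shrinkage_lut(lam=0.1, p=0.8, gmax=1.0, n=1024):
    """Tabulate argmin_t lam*|t|^p + (t - g)^2 for g in [0, gmax].

    The 1-D problem is nonlinear (non-convex for p < 1), but it only
    depends on |g|, so a brute-force scan over candidate t values is
    done once per table entry, then reused for every pixel gradient."""
    gs = np.linspace(0.0, gmax, n)
    ts = np.linspace(0.0, gmax, 4096)
    # cost[i, j] = objective of candidate ts[j] for gradient gs[i]
    cost = lam * ts[None, :] ** p + (ts[None, :] - gs[:, None]) ** 2
    lut = ts[np.argmin(cost, axis=1)]
    return gs, lut

def shrink(g, gs, lut):
    """Apply the tabulated shrinkage to an array of gradients, using
    linear interpolation and odd symmetry for negative values."""
    return np.sign(g) * np.interp(np.abs(g), gs, lut)
```

The sparse prior shows up directly in the table: small gradients (likely illumination) are clamped to zero, while large gradients (likely reflectance edges) pass through almost unchanged, and applying the table is a single interpolation per pixel rather than a nonlinear solve.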
We present PIDGIN, a program analysis and understanding tool that allows developers to explore the information flows that exist in programs and to specify and enforce security policies that restrict these information flows. PIDGIN uses program-dependence graphs (PDGs) to precisely capture the information flows within a program. PDGs can be queried using a custom query language to explore and describe information flows in programs. A developer can specify strong information security policies by asserting that specific queries return no results (i.e., asserting the absence of certain information flows in the program). To check whether a program satisfies a security policy, a developer can simply evaluate the query against the program's dependence graph.
The query language is expressive, supporting a large class of precise, application-specific security guarantees. PIDGIN can be used to explore information security guarantees in legacy programs, or to support the specification, enforcement, and modification of information security requirements during program development.
We describe the design and implementation of PIDGIN and report on using PIDGIN both to explore security guarantees in existing open-source applications, and to specify and enforce security guarantees during application development.
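To make the "policy = query with no results" idea concrete, here is a toy sketch: a hand-written dependence graph and a plain reachability query standing in for PIDGIN's actual query language. The graph, node names, and policy are hypothetical, and real PDGs distinguish data dependences from control dependences; this only illustrates checking a policy by asserting that a query over the graph is empty.

```python
# Hypothetical miniature PDG: nodes are program points, edges are
# dependences (PIDGIN's real graphs are built from program analysis).
pdg = {
    "password": ["hash(password)"],
    "hash(password)": ["db.store"],
    "username": ["log.write", "db.store"],
    "db.store": [],
    "log.write": ["network.send"],
    "network.send": [],
}

def flows(pdg, sources, sinks):
    """Return every sink node reachable from any source node: the set
    of information flows the policy query asks about."""
    seen, stack = set(), list(sources)
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(pdg.get(node, []))
    return seen & set(sinks)

# Policy: "the password never reaches the network" holds iff the
# query returns no results.
assert flows(pdg, ["password"], ["network.send"]) == set()
# The username, however, does leak to the network via the log:
assert flows(pdg, ["username"], ["network.send"]) == {"network.send"}
```

The empty-result check is what makes the guarantee strong: rather than searching for known-bad patterns, the query describes an entire class of flows, and the policy holds only if none of them exist anywhere in the graph.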