Recent image-based rendering techniques have shown success in approximating detailed models using sampled images over coarser meshes. One limitation of these techniques is that the coarseness of the geometric mesh is apparent in the rough polygonal silhouette of the rendering. In this paper, we present a scheme for accurately capturing the external silhouette of a model in order to clip the approximate geometry.
Given a detailed model, silhouettes sampled from a discrete set of viewpoints about the object are collected into a silhouette map. The silhouette from an arbitrary viewpoint is then computed as the interpolation from three nearby viewpoints in the silhouette map. Pairwise silhouette interpolation is based on a visual hull approximation in the epipolar plane. The silhouette map itself is adaptively simplified by removing views whose silhouettes are accurately predicted by interpolation of their neighbors. The model geometry is approximated by a progressive hull construction, and is rendered using projected texture maps. The 3D rendering is clipped to the interpolated silhouette using stencil planes.
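A small sketch of one piece of this pipeline: selecting the nearby sampled viewpoints used to interpolate the silhouette. The code below is an illustrative simplification, not the paper's method; it picks the three sampled view directions nearest a query direction and weights them by inverse angular distance, standing in for the spherical-triangle interpolation over the silhouette map.

```python
import math

def view_weights(query, samples):
    """Pick the three sampled viewpoints nearest the query view direction
    and weight them by inverse angular distance.  A simplified stand-in for
    the silhouette map's three-viewpoint interpolation; directions are unit
    vectors given as 3-tuples."""
    def angle(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return math.acos(max(-1.0, min(1.0, dot)))
    nearest = sorted(samples, key=lambda s: angle(query, s))[:3]
    inv = [1.0 / (angle(query, s) + 1e-9) for s in nearest]
    total = sum(inv)
    return [(s, w / total) for s, w in zip(nearest, inv)]
```

When the query coincides with a sampled viewpoint, nearly all the weight falls on that sample, so the interpolated silhouette degenerates to the stored one, as it should.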
Much recent work on boolean satisfiability has focused on incomplete algorithms that sacrifice accuracy for improved running time. Statistical predictors of satisfiability do not return actual satisfying assignments, but at least two have been developed that run in linear time. Search algorithms allow increased accuracy with additional running time, and can return satisfying assignments. The efficient search algorithms that have been proposed are based on iteratively improving a random assignment, in effect searching a graph of degree equal to the number of variables. In this paper, we examine an incomplete algorithm based on searching a standard binary tree, in which statistical predictors are used to speculatively prune the tree in constant time. Experimental evaluation on hard random instances shows it to be the first practical incomplete algorithm based on tree search, surpassing even graph-based methods on smaller instances.
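The idea of speculative pruning can be sketched as follows. This is an illustration under assumptions, not the paper's algorithm: the "predictor" here is a crude clause-to-variable ratio test (motivated by the random 3-SAT threshold), standing in for the linear-time statistical predictors mentioned above, and the search is a plain DPLL-style binary tree walk. Because the predictor can misjudge a branch, the procedure is incomplete: it may return no assignment for a satisfiable formula.

```python
def predicted_sat(clauses, n_vars):
    """Hypothetical constant-time predictor: a crude clause-to-variable
    ratio test near the random 3-SAT threshold.  A stand-in for the
    statistical predictors described above."""
    if n_vars == 0:
        return True
    return len(clauses) / n_vars < 4.2

def simplify(clauses, lit):
    """Assign literal `lit` true; drop satisfied clauses, shrink the rest.
    Returns None on an empty clause (contradiction)."""
    out = []
    for c in clauses:
        if lit in c:
            continue
        reduced = tuple(l for l in c if l != -lit)
        if not reduced:
            return None
        out.append(reduced)
    return out

def search(clauses, variables, assignment):
    """Incomplete DPLL-style tree search with speculative pruning.
    Clauses are tuples of nonzero ints (negative = negated variable)."""
    if clauses is None:
        return None            # contradiction on this branch
    if not clauses:
        return dict(assignment)  # all clauses satisfied
    if not predicted_sat(clauses, len(variables)):
        return None            # speculative prune: may lose solutions
    if not variables:
        return None
    v, rest = variables[0], variables[1:]
    for val in (v, -v):
        assignment[v] = val > 0
        result = search(simplify(clauses, val), rest, assignment)
        if result is not None:
            return result
    del assignment[v]
    return None
```

On a satisfiable instance the search returns a model; on an unsatisfiable one it returns None, though with an aggressive predictor a None answer is only "probably unsatisfiable."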
Computer system performance is a measure of how well the operating system shares hardware and software resources among the various applications that are running on it. The goal of performance monitoring in the VINO extensible operating system is to make recommendations for improving application performance. This is accomplished by collecting system data through a monitoring agent, automatically identifying conditions that cause performance degradation, and presenting evidence to support the agent's conclusions. Once the operating system is well monitored, it may be able to tune itself to improve system performance.
Within the framework of VINO, I describe a system that monitors itself and gathers information about its performance. I show that system monitoring is advantageous for two reasons. First, as demonstrated with two user applications, it can warn designers of application bottlenecks and system degradation. Second, the operating system can dynamically adapt its own kernel behavior and policies after monitoring access patterns. Thus, the monitoring system aids the application designer in identifying performance limitations and adapts kernel policies to improve overall system performance.
With growing opportunities for individually motivated agents to work collaboratively to satisfy shared goals, it becomes increasingly important to design agents that can make intelligent decisions in the context of commitments to group activities. In particular, agents need to be able to reconcile their intentions to do group-related actions with other, conflicting actions. In this thesis, I present the framework for the SPIRE experimental system, which allows the process of intention reconciliation in team contexts to be simulated and studied. I define a measure of social consciousness and show how it can be incorporated into the SPIRE system. Using SPIRE, I then investigate the effect of infinite and limited time horizons on agents with varying levels of social consciousness, as well as the resulting effect on the utility of the group as a whole. Using these experiments as a basis for theoretical conclusions, I suggest preliminary principles for designers of collaborative agents.
The UNIX Fast File System (FFS) is probably the most widely used file system for performance comparisons. However, such comparisons frequently overlook many of the performance enhancements that have been added over the past decade. In this paper, we explore the two most commonly used approaches for improving the performance of meta-data operations and recovery: logging and Soft Updates.
The commercial sector has moved en masse to logging file systems, as evidenced by their presence on nearly every server platform available today: Solaris, AIX, Digital UNIX, HP-UX, IRIX and Windows NT. On all but Solaris, the default file system uses logging. Meanwhile, Soft Updates holds the promise of providing stronger reliability guarantees than logging, with faster recovery and superior performance in certain boundary cases.
In this paper, we explore the benefits of both Soft Updates and logging, comparing their behavior on both microbenchmarks and workload-based macrobenchmarks. We find that logging alone is not sufficient to "solve" the meta-data update problem. If synchronous semantics are required (i.e., meta-data operations are durable once the system call returns), then the logging systems cannot realize their full potential. Only when this synchronicity requirement is relaxed can logging systems approach the performance of systems like Soft Updates. Our asynchronous logging and Soft Updates systems perform comparably in most cases. While Soft Updates excels in some meta-data intensive microbenchmarks, it outperforms logging on only two of the four workloads we examined and performs less well on one.
We investigate variations of a novel, recently proposed load balancing scheme based on small amounts of choice. The static (hashing) setting is modeled as a balls-and-bins process. The balls are sequentially placed into bins, with each ball selecting $d$ bins randomly and going to the bin with the fewest balls. A similar dynamic setting is modeled as a scenario where tasks arrive as a Poisson process at a bank of FIFO servers and queue at one for service. Tasks probe a small random sample of servers in the bank and queue at the server with the fewest tasks.

Recently it has been shown that breaking ties in a fixed, asymmetric fashion actually improves performance, whereas in all previous analyses, ties were broken randomly. We demonstrate the nature of this improvement using fluid limit models, suggest further improvements, and verify and quantify the improvement through simulations.
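The static balls-and-bins process is easy to simulate. The sketch below places balls with $d$ random choices and supports both tie-breaking rules; the "asymmetric" rule here (lowest bin index among the tied samples) is a simplified stand-in for the fixed asymmetric tie-breaking discussed above, not the exact published scheme.

```python
import random

def max_load(n_balls, n_bins, d=2, asymmetric=False, seed=0):
    """Sequentially place n_balls into n_bins.  Each ball samples d bins
    uniformly at random and joins the least-loaded sample.  Ties are broken
    uniformly at random, or by a fixed rule (lowest bin index) as a
    simplified stand-in for asymmetric tie-breaking."""
    rng = random.Random(seed)
    bins = [0] * n_bins
    for _ in range(n_balls):
        sample = [rng.randrange(n_bins) for _ in range(d)]
        least = min(bins[b] for b in sample)
        tied = [b for b in sample if bins[b] == least]
        bins[min(tied) if asymmetric else rng.choice(tied)] += 1
    return max(bins)
```

Comparing `d=1` against `d=2` on, say, 10,000 balls in 1,000 bins exhibits the well-known drop in maximum load from two choices; the tie-breaking variants can then be compared at equal `d`.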
On the vast majority of today's computers, the dominant form of computation is GUI-based user interaction. In such an environment, the user's perception is the final arbiter of performance. Human-factors research shows that a user's perception of performance is affected by unexpectedly long delays. However, most performance-tuning techniques currently rely on throughput-sensitive benchmarks. While these techniques improve the average performance of the system, they do little to detect or eliminate response-time variabilities -- in particular, unexpectedly long delays.
We introduce a measurement methodology that improves user-perceived performance by helping us to identify and eliminate the causes of the unexpectedly long response times that users find unacceptable. We describe TIPME (The Interactive Performance Monitoring Environment), a collection of measurement tools that implements this methodology, and we present two case studies that demonstrate its effectiveness. Each of the performance problems we identify drastically affects variability in response time in a mature system, demonstrating that current tuning techniques do not address this class of performance problems.
We investigate a family of algorithms for graph bisection that are based on a simple local connectivity heuristic, which we call seed-growth. We show how the heuristic can be combined with stochastic search procedures and a postprocess application of the Kernighan-Lin algorithm. In a series of time-equated comparisons against large-sample runs of pure Kernighan-Lin, the new algorithms find bisections of the same or superior quality. Their performance is particularly good on structured graphs representing important industrial applications. An appendix provides further favorable comparisons to other published results. Our experimental methodology and extensive empirical results provide a solid foundation for further empirical investigation of graph-bisection algorithms.
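A minimal sketch of a seed-growth-style bisection follows. This is a guess at the flavor of the local connectivity heuristic named above, not the published algorithm: one half of the partition is grown outward from a seed vertex by breadth-first search until it holds half the vertices, which tends to keep the cut small on well-clustered graphs.

```python
import random
from collections import deque

def seed_growth_bisection(adj, seed_vertex=None, rng=None):
    """Grow one side of a bisection from a seed vertex by BFS.
    `adj` maps each vertex to a list of neighbors.  An illustrative
    simplification of a seed-growth heuristic, not the paper's algorithm."""
    rng = rng or random.Random(0)
    vertices = list(adj)
    if seed_vertex is None:
        seed_vertex = rng.choice(vertices)
    half = len(vertices) // 2
    side = {seed_vertex}
    frontier = deque([seed_vertex])
    while len(side) < half and frontier:
        v = frontier.popleft()
        for u in adj[v]:
            if u not in side and len(side) < half:
                side.add(u)
                frontier.append(u)
    # Top up with arbitrary vertices if the seed's component was too small.
    for v in vertices:
        if len(side) >= half:
            break
        side.add(v)
    return side, set(vertices) - side

def cut_size(adj, side):
    """Number of edges crossing from `side` to its complement."""
    return sum(1 for v in side for u in adj[v] if u not in side)
```

On a graph made of two equal cliques joined by a single bridge edge, seeding inside one clique recovers the natural bisection with a cut of one; a Kernighan-Lin pass would then refine whatever the growth phase produces.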
The growing need for individually motivated agents to work collaboratively to satisfy shared goals has made it increasingly important to design agents that can make intelligent decisions in the context of commitments to group activities. Agents need to reconcile their intentions to do group-related actions with other, conflicting actions. We describe the SPIRE experimental system, which allows the process of intention reconciliation in team contexts to be simulated and studied. We define a measure of social consciousness, discuss its incorporation into the SPIRE system, and present several experiments that investigate how measures of group good and individual good interact in decision-making. In particular, we investigate the effect of infinite and limited time horizons, different task densities, and varying levels of social consciousness on the utility of the group and the individuals it comprises. A key finding is that an intermediate level of social consciousness yields better results than an extreme commitment. We suggest preliminary principles for designers of collaborative agents based on the results.
Proportional-share resource management is becoming increasingly important in today's computing environments. In particular, the growing use of the computational resources of central service providers argues for a proportional-share approach that allows clients to obtain resource shares that reflect their relative importance. In such environments, clients must be isolated from one another to prevent the activities of one client from impinging on the resource rights of others. However, such isolation limits the flexibility with which resource allocations can be modified to reflect the actual needs of clients. We present extensions to the lottery-scheduling resource-management framework that increase its flexibility while preserving its ability to provide secure isolation. To demonstrate how this extended framework safely overcomes the limits imposed by existing proportional-share schemes, we have implemented a prototype system that uses the framework to manage CPU time, physical memory, and disk bandwidth. We present the results of experiments that evaluate the prototype, and we show that our framework enables clients of central servers to achieve significant improvements in performance.
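The core of lottery scheduling is simple to illustrate. The sketch below draws one lottery over a set of clients holding tickets, so each client's chance of winning the next quantum equals its ticket share; the client names are illustrative, and the extensions described above (secure isolation, ticket transfers across resources) are not modeled.

```python
import random

def hold_lottery(tickets, rng):
    """One lottery drawing: pick a winning ticket uniformly at random and
    return the client holding it.  `tickets` maps client name -> ticket
    count; names here are purely illustrative."""
    total = sum(tickets.values())
    draw = rng.randrange(total)
    for client, count in tickets.items():
        if draw < count:
            return client
        draw -= count
```

Over many drawings, a client holding 75 of 100 tickets wins about 75% of the quanta, which is the proportional-share guarantee in its simplest form.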
The rising popularity of thin client computing and multi-user, remote, graphical interaction returns to the fore a range of long-dormant operating system research issues, and introduces a number of new directions.
This paper investigates the impact of operating system design on the performance of thin client service. We contend that the key performance metric for this type of system is user-perceived latency and give a structured approach for investigating operating system design with this criterion in mind.
In particular, we apply our approach to a quantitative comparison and analysis of Windows NT, Terminal Server Edition (TSE) and Linux with the X Window System, two popular implementations of thin client service.
We find that the processor and memory scheduling algorithms in both operating systems are not tuned for thin client service. Under heavy CPU and memory load, we observed user-perceived latencies of up to 100 times beyond the threshold of perception, and even in the idle state these systems induce unnecessary latency. TSE performs particularly poorly despite scheduler modifications to improve interactive responsiveness. We also show that TSE's network protocol outperforms X by up to six times, and makes use of a bitmap cache that is essential for handling dynamic elements of modern user interfaces and can reduce network load in these cases by up to 2000%.