Rigorous estimation of maximum floating-point round-off errors is a capability central to many formal verification tools. Unfortunately, available techniques for this task often provide very pessimistic overestimates, causing unnecessary verification failures. Furthermore, there are no available rigorous approaches that handle transcendental functions other than by analyzing truncated series expansions thereof. We have developed a new approach called Symbolic Taylor Expansions that avoids these problems, and implemented a new tool called FPTaylor embodying this approach. Key to our approach is the use of rigorous global optimization, instead of the more familiar interval arithmetic, affine arithmetic, and/or SMT solvers. In addition to providing far tighter upper bounds on round-off error in the vast majority of cases, FPTaylor also emits per-instance analysis certificates in the form of HOL Light proofs that can be machine checked. In this paper, we present details of Symbolic Taylor Expansions, and comparisons with other rigorous approaches. We also release FPTaylor along with our benchmarks for evaluation.
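The first-order idea behind this kind of analysis can be illustrated with a minimal sketch (this is not FPTaylor itself, and the crude grid-search maximizer stands in for the rigorous global optimizer the paper uses): each floating-point operation is modelled as the exact operation times (1 + e_i) with |e_i| bounded by the unit round-off, and the error bound is the maximum over the input domain of the sum of first-order sensitivities to the e_i.

```python
# A minimal sketch of first-order round-off estimation, assuming IEEE binary64.
# Each floating-point op  fl(op(a, b)) = op(a, b) * (1 + e_i)  with |e_i| <= U;
# the first-order error bound is  max over inputs of  sum_i |df/de_i| * U.

U = 2.0 ** -53  # unit round-off for IEEE double precision

def roundoff_bound(sensitivities, lo, hi, samples=1000):
    """Bound the first-order round-off error of an expression whose
    sensitivities to each rounding term e_i are given as functions of x,
    by (crudely) maximizing their absolute sum over [lo, hi] on a grid.
    A rigorous tool would use verified global optimization here instead."""
    worst = 0.0
    for k in range(samples + 1):
        x = lo + (hi - lo) * k / samples
        worst = max(worst, sum(abs(s(x)) for s in sensitivities))
    return worst * U

# Example: f(x) = fl(fl(x*x) + x) = (x*x*(1 + e1) + x)*(1 + e2).
# df/de1 at e = 0 is x*x; df/de2 at e = 0 is x*x + x.
bound = roundoff_bound([lambda x: x * x, lambda x: x * x + x], 1.0, 2.0)
print(bound)  # maximum is attained at x = 2: (4 + 6) * U = 10U
```

The symbolic step (keeping sensitivities as functions of the inputs rather than intervals) is what lets a global optimizer find a tight maximum instead of the pessimistic worst case that plain interval arithmetic would report.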
We consider the ultimate limits of program-specific garbage collector (GC) performance for real programs. We first characterize the GC schedule optimization problem. Based on this characterization, we develop a linear-time dynamic programming solution that, given a program run and heap size, computes an optimal schedule of collections for a non-generational collector. Using an analysis of a heap object graph of the program, we compute a property of heap objects that we call their pre-birth time. This information enables us to extend the non-generational GC schedule problem to the generational GC case in a way that also admits a dynamic programming solution with cost cubic in the length of the trace (number of objects allocated). This improves our previously reported approximately optimal result. Our experimental results on traces from Java programs of the DaCapo benchmark suite show that there is considerable promise to reduce garbage collection costs for some programs by developing program-specific collection policies. For a given space budget, optimal schedules often obtain modest but useful time savings, and for a given time budget, optimal schedules can obtain considerable space savings.
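The flavour of the non-generational schedule problem can be sketched with a small dynamic program (illustrative only: the paper's formulation is linear-time, and real collection costs are richer than the unit cost per collection assumed here). Given an allocation trace, the live size at each potential collection point, and a heap budget, we choose collection points so the heap never overflows while minimizing the number of collections.

```python
# A minimal sketch of optimal GC scheduling as dynamic programming.
# Assumptions (not from the paper): each collection costs 1, and live[i]
# gives the bytes surviving a collection run just before step i.

def optimal_schedule(alloc, live, budget):
    """alloc[i]: bytes allocated at step i; live[i]: live bytes after a
    collection just before step i. Returns the fewest collections that keep
    heap occupancy within budget, or None if no schedule works."""
    n = len(alloc)
    INF = float("inf")
    best = [INF] * n      # best[i]: fewest collections with a fresh epoch at step i
    best[0] = 0           # program start: empty heap, no collection yet
    answer = INF
    for i in range(n):
        if best[i] == INF:
            continue
        used = live[i]    # survivors of the collection opening this epoch
        for j in range(i, n):
            used += alloc[j]
            if used > budget:
                break
            # we can run through step j; collecting before step j+1
            # opens a new epoch there at the price of one collection
            if j + 1 < n:
                best[j + 1] = min(best[j + 1], best[i] + 1)
        else:
            answer = min(answer, best[i])  # whole tail fits without collecting
    return None if answer == INF else answer

# Budget 5: the 8 bytes allocated cannot all be held at once,
# but a single well-placed collection suffices.
print(optimal_schedule([2, 2, 2, 2], [0, 1, 1, 1], 5))  # → 1
```

The generational variant the abstract describes additionally needs per-object information (the pre-birth times) to decide which objects a minor collection may skip, which is what raises the cost from linear to cubic in the trace length.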
In security-typed programming languages, types statically enforce noninterference between potentially conspiring values, such as the arguments and results of functions. But to adopt static security types, like other advanced type disciplines, programmers face a steep wholesale transition, often forcing them to refactor working code just to satisfy their type checker. To provide a gentler path to security typing that supports safe and stylish but hard-to-verify programming idioms, researchers have designed languages that blend static and dynamic checking of security types. Unfortunately, the resulting languages only support static, type-based reasoning about noninterference if a program is purely statically secured. This limitation substantially weakens the benefits that dynamic enforcement brings to static security typing. In this paper we present GSLref, a gradual security-typed higher-order language with references, which lets programmers use types to reason statically about noninterference in all programs, even those that enforce security dynamically. We prove that GSLref satisfies all but one of Siek et al.'s refined criteria for gradually-typed languages, which ensure that programs can seamlessly combine and transition between simple typing and security typing. It remains an open question whether such a language could satisfy Siek et al.'s dynamic gradual guarantee. To realize this design, we were led to acknowledge a sharp distinction between syntactic type safety and semantic type soundness, each of which constrains the design of the gradual language.
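The dynamic side of security typing can be illustrated with a minimal monitor sketch (this is not GSLref; the class and sink names are invented for illustration): values carry security labels, operations join the labels of their operands, and an explicit-flow check stops a High value from reaching a Low-observable sink at runtime.

```python
# A minimal sketch of dynamic information-flow enforcement.
# Labels form a two-point lattice LOW <= HIGH; joins use max.

LOW, HIGH = 0, 1

class Labeled:
    """A value tagged with a security label (illustrative, not GSLref)."""
    def __init__(self, value, label):
        self.value, self.label = value, label

    def __add__(self, other):
        # the result is as secret as the most secret operand (label join)
        return Labeled(self.value + other.value, max(self.label, other.label))

def output_low(v):
    """A Low-observable sink: dynamically reject secret values."""
    if v.label == HIGH:
        raise RuntimeError("noninterference violation: High value at Low sink")
    return v.value

public = Labeled(1, LOW)
secret = Labeled(42, HIGH)
print(output_low(public + public))   # fine: prints 2
try:
    output_low(public + secret)      # rejected at runtime
except RuntimeError as e:
    print(e)
```

The point of the abstract is that a purely dynamic monitor like this one forfeits static reasoning; GSLref is designed so the type-based noninterference argument survives even in programs that fall back on such runtime checks.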
Some interesting communication protocols can be precisely described only by context-free session types, an extension of conventional session types supporting a general form of sequential composition. The complex metatheory of context-free session types, however, hinders the definition of corresponding checking and inference algorithms. In this work we address and solve these problems by introducing a new syntax-directed type system for context-free session types, of which we provide two OCaml embeddings.
Constructing a high-performance garbage collector is hard. Constructing a fully concurrent on-the-fly (OTF), compacting collector is much more so. We describe our experience of implementing the Sapphire algorithm as the first on-the-fly, parallel, replication-copying garbage collector for the Jikes RVM Java virtual machine. In part, we explain our innovations, such as copying with hardware and software transactions, OTF management of Java's reference types, and simple yet correct lock-free management of volatile fields in a replicating collector. We fully evaluate, for the first time and using realistic benchmarks, Sapphire's performance and suitability as a low-latency collector. An important contribution of this work is a detailed description of our experience of building an OTF copying collector for a complete JVM with some assurance that it is correct. A key aspect of this is model checking of critical components of this complicated and highly concurrent system.
Dependent types are a key feature of the type systems used by the proof assistants based on the intuitionistic Curry-Howard correspondence. On the other hand, this correspondence can be extended to classical logic provided the language of proofs is enriched with control operators. Alas, control operators are known to misbehave in the presence of dependent types, unless dependencies are restricted to values. Besides, while sequent calculi smoothly support abstract machine and continuation-passing style interpretations, there is no such presentation of a language with dependent types. The main achievement of this paper is to give a sequent calculus presentation of a call-by-value language with a control operator and dependent types, and to justify its soundness through a continuation-passing style translation. We start from the call-by-value version of the λµµ̃-calculus and design a minimal language with a value restriction and a type system that includes a list of explicit dependencies and maintains type safety. We then show how to relax the value restriction and introduce delimited continuations to directly prove the consistency by means of a continuation-passing style translation. Finally, we relate our calculus to a similar system by Lepigre, and present a methodology to transfer properties from this system to our own.
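The interaction of a control operator with a continuation-passing style translation can be illustrated with a tiny untyped sketch (this is plain call/cc in CPS, not the dependently typed calculus of the paper): invoking a captured continuation discards the continuation at its call site and resumes the captured one instead.

```python
# A minimal sketch of call-by-value CPS with a control operator.
# CPS values here are Python functions that take a continuation k.

def callcc(f):
    """CPS call/cc: reify the current continuation k as a function that,
    when invoked, ignores its own call-site continuation and jumps to k."""
    def run(k):
        reified = lambda v: lambda _ignored_k: k(v)
        return f(reified)(k)
    return run

# CPS encoding of:  1 + callcc(lambda k: 10 + k(2))
# Invoking k(2) aborts the pending "10 +" context, so the result is 1 + 2.
def prog(k):
    body = lambda cc: lambda k2: cc(2)(lambda v: k2(10 + v))
    return callcc(body)(lambda v: k(1 + v))

print(prog(lambda v: v))  # → 3

# If the body returns normally instead, the pending context applies: 1 + 10.
prog2 = lambda k: callcc(lambda cc: lambda k2: k2(10))(lambda v: k(1 + v))
print(prog2(lambda v: v))  # → 11
```

It is exactly this kind of context-discarding jump that clashes with dependent types: a dependency on a term that may abort its context is not stable, which is why the paper starts from a value restriction before relaxing it with delimited continuations.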
While high-level languages come with significant readability and maintainability benefits, their performance remains difficult to predict. For example, programmers may unknowingly use language features inappropriately, which causes their programs to run slower than expected. To address this issue, we introduce feature-specific profiling, a technique that reports performance costs in terms of linguistic constructs. These profilers help programmers find expensive uses of specific language features. We describe the architecture of such a profiler, explain prototypes for two languages, and thus provide evidence for the idea's general usefulness as a performance debugging tool.
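The core idea can be sketched in a few lines (the feature name and instrumentation hook below are illustrative, not the paper's implementation): instead of attributing time to functions, instrument each use of a linguistic feature and accumulate its cost per feature.

```python
# A minimal sketch of feature-specific profiling: charge elapsed time to the
# language feature being exercised, rather than to the enclosing function.

import time
from collections import defaultdict

feature_costs = defaultdict(float)  # feature name -> accumulated seconds

def instrument(feature):
    """Wrap one use of a feature so its runtime is charged to that feature."""
    def wrap(f):
        def wrapped(*args, **kwargs):
            start = time.perf_counter()
            try:
                return f(*args, **kwargs)
            finally:
                feature_costs[feature] += time.perf_counter() - start
        return wrapped
    return wrap

@instrument("pattern-matching")  # hypothetical feature being profiled
def classify(x):
    return "even" if x % 2 == 0 else "odd"

for i in range(1000):
    classify(i)
print(dict(feature_costs))  # cost reported per feature, not per function
```

A real feature-specific profiler hooks into the language implementation so that every use of a construct is instrumented automatically; the decorator here only mimics that reporting shape.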
We present Armus, a verification tool for dynamically detecting or avoiding barrier deadlocks. The core design of Armus is based on phasers, a generalisation of barriers that supports split-phase synchronisation, dynamic membership, and optional-waits. This allows Armus to handle the key barrier synchronisation patterns found in modern languages and libraries. We implement Armus for X10 and Java, giving the first sound and complete barrier deadlock verification tools in these settings. Armus introduces a novel event-based graph model of barrier concurrency constraints that distinguishes between task-event and event-task dependencies. Decoupling these two kinds of dependencies facilitates the verification of distributed barriers with dynamic membership, a challenging feature of X10. Our base representation can dynamically switch between a task-to-task model, the Wait-for Graph (WFG), and an event-to-event model, the State Graph (SG), to improve the scalability of the analysis. Formally, we show that the verification is sound and complete with respect to the occurrence of deadlock in our core phaser language, and that switching graph representations preserves the soundness and completeness properties. These results are machine checked with the Coq proof assistant. Practically, we evaluate the runtime overhead of our implementations using three benchmark suites in local and distributed scenarios. Regarding deadlock detection, distributed scenarios show negligible overheads and local scenarios show overheads below 1.15×. Deadlock avoidance is more demanding, and highlights the potential gains from dynamic graph selection. In one benchmark scenario, the runtime overheads are 1.8× for dynamic selection, 2.6× for SG-static selection, and 5.9× for WFG-static selection.
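The two graph models can be illustrated with a small sketch (the encoding is ours, not Armus's): from task-event dependencies (task t is blocked on event e) and event-task dependencies (event e still waits for task t to arrive), one can build either a task-to-task WFG or an event-to-event SG, and a barrier deadlock shows up as a cycle in either.

```python
# A minimal sketch of deadlock detection over the two graph projections.

def has_cycle(edges):
    """Standard DFS cycle detection on a directed graph given as edge pairs."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    def dfs(u):
        color[u] = GRAY
        for v in graph.get(u, []):
            c = color.get(v, WHITE)
            if c == GRAY or (c == WHITE and dfs(v)):
                return True
        color[u] = BLACK
        return False
    return any(dfs(u) for u in graph if color.get(u, WHITE) == WHITE)

# blocked[t]: the event task t waits on; registered[e]: tasks e waits for.
blocked = {"t1": "e1", "t2": "e2"}
registered = {"e1": ["t2"], "e2": ["t1"]}   # t1 waits for t2, and vice versa

# Task-to-task projection (WFG) and event-to-event projection (SG):
wfg = [(t, t2) for t, e in blocked.items() for t2 in registered[e]]
sg = [(blocked[t], e) for e, ts in registered.items() for t in ts if t in blocked]

print(has_cycle(wfg), has_cycle(sg))  # both True: a barrier deadlock
```

The scalability argument in the abstract comes from the fact that the two projections can differ greatly in size (few events with many tasks favours the SG; the reverse favours the WFG), while, as the paper proves, they agree on whether a deadlock exists.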
We present a rich type system with subtyping for an extension of System F. Our type constructors include sum and product types, universal and existential quantifiers, inductive and coinductive types. The latter two may carry annotations allowing the encoding of size invariants that are used to ensure the termination of recursive programs. For example, the termination of quicksort can be derived by showing that partitioning a list does not increase its size. The system deals with complex programs involving mixed induction and coinduction, or even mixed polymorphism and (co-)induction (as for Scott-encoded data types). One of the key ideas is to completely separate the notion of size from recursion. We do not check the termination of programs directly, but rather show that their (circular) typing proofs are well-founded. We then obtain termination using a standard semantic proof of normalisation. To demonstrate the practicality of our system, we provide an implementation which accepts all the examples discussed in the paper.
In multiparty session types, interconnection networks identify which roles in a session engage in communication (i.e. two roles are connected if they exchange a message). In session-based interpretations of linear logic, the analogous notion corresponds to determining which processes are composed, or cut, using compatible channels typed by linear propositions. In this work we show that well-formed interactions represented in a session-based interpretation of classical linear logic (CLL) form strictly less expressive interconnection networks than those of a multiparty session calculus. To achieve this result we introduce a new compositional synthesis property dubbed partial multiparty compatibility (PMC), enabling us to build a global type denoting the interactions obtained by iterated composition of well-typed CLL threads. We then show that CLL composition induces PMC global types without circular interconnections between three (or more) participants. PMC is then used to define a new CLL composition rule which can form circular interconnections but preserves the deadlock-freedom of CLL.
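The structural distinction at the heart of this result can be sketched concretely (the encoding is illustrative, not the paper's formalism): treat roles as nodes and "these two roles exchange a message" as undirected edges, and check whether the interconnection network is circular, i.e. contains a cycle through three or more participants. Per the abstract, standard CLL composition only yields the acyclic case.

```python
# A minimal sketch of detecting circular interconnection networks.
# Union-find: an undirected graph has a cycle iff some edge joins two roles
# that are already connected. Parallel edges between the same pair of roles
# (repeated exchanges) are deduplicated, so any detected cycle necessarily
# passes through three or more participants.

def is_circular(edges):
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for a, b in {tuple(sorted(e)) for e in edges}:  # dedupe parallel edges
        ra, rb = find(a), find(b)
        if ra == rb:
            return True   # a second independent connection closes a cycle
        parent[ra] = rb
    return False

# A tree-shaped (CLL-style) network versus a ring of three participants:
print(is_circular([("p", "q"), ("q", "r")]))              # → False
print(is_circular([("p", "q"), ("q", "r"), ("r", "p")]))  # → True
```

The interesting direction in the paper is, of course, not detecting such rings but showing when they can be admitted: PMC identifies compositions whose circular networks remain deadlock-free.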