Winter 2025 Seminars

Organizers: Ian Kariniemi, Laura Israel

All Talks

Title | Speaker
Bluebell: An Alliance of Relational Lifting and Independence For Probabilistic Reasoning | Jialu Bao
Formally Verified Path ORAM | Hannah Leung
A Verified Foreign Function Interface between Coq and C | Joomy Korkut
Dependable Computing | Alain Kägi
Cultivating Performance with types and without | Artem Pelenitsyn
Guided Tour Through the optics Library | Katie Casamento
Inference for applicative functors in DSLs | Brent Yorgey
Neurosymbolic Approaches to Safe Reinforcement Learning | Greg Anderson

Bluebell: An Alliance of Relational Lifting and Independence For Probabilistic Reasoning

Speaker: Jialu Bao

Jialu Bao is a final-year computer science Ph.D. student at Cornell, working on the verification of probabilistic programs. For the last few years, her main research focus has been probabilistic separation logic, with work on both the theoretical side (e.g., categorical models) and the practical side (e.g., more expressive logics, automation); she has also worked on the generation of probabilistic loop invariants, which led to a Distinguished Paper Award at CAV 2022. Outside of research, she enjoys tennis, biking, and discovering new things to do in Ithaca.

Abstract

We present Bluebell, a program logic for reasoning about probabilistic programs in which unary and relational styles of reasoning come together to create new reasoning tools. So far, the two styles have largely remained separate in the many program logics designed for the deductive verification of probabilistic programs. Bluebell unifies them through a new modality called "joint conditioning" that can encode and illuminate the rich interaction between conditional independence and relational liftings, the two powerhouses of the two styles of reasoning.

Formally Verified Path ORAM

Speaker: Hannah Leung

Hannah Leung is pursuing her Ph.D. at the University of Illinois Urbana-Champaign, where she works with Dr. Talia Ringer and Dr. Christopher Fletcher (UC Berkeley). She is broadly interested in formal verification, programming languages, security, and applying formal methods to prove the security of critical systems. She has presented her work at CoqPL.

Abstract

Proving the correctness of complex systems that are imperative in nature is challenging. Often we need to come up with clever inductive invariants to reason about interesting properties, and we may also need specialized frameworks to reason about these systems modularly. We develop such a framework for reasoning about an implementation of a security primitive called Path Oblivious RAM (Path ORAM). Path ORAM is a kind of RAM that protects user privacy by preventing information leakage through memory access patterns, while imposing little performance overhead for its security guarantees. Our framework views sequences of RAM accesses---that is, read or write operations---as programs. It then includes a specialized program logic, which we call Access Logic, for reasoning about properties that hold before and after such access sequences. We formalize Path ORAM in Rocq and use our embedded Access Logic to prove functional correctness (informally, that Path ORAM provides all of the functionality users expect from a RAM) as well as the security property that Path ORAM provides.
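The abstract's core idea, viewing sequences of RAM accesses as programs, can be sketched in miniature. The names and the default-to-zero memory model below are my own illustrative assumptions, not part of the talk's Rocq development:

```haskell
-- Hypothetical sketch of "access sequences as programs": a RAM access is a
-- read or a write, and a program is a list of accesses interpreted over a
-- memory. (All names here are illustrative, not from the Rocq formalization.)
import qualified Data.Map.Strict as M

type Addr = Int
type Val  = Int

data Access = Read Addr | Write Addr Val

-- Interpret an access sequence over a memory, collecting the values read.
-- Uninitialized addresses read as 0, an assumption made for this sketch.
run :: [Access] -> M.Map Addr Val -> ([Val], M.Map Addr Val)
run []         mem = ([], mem)
run (a : rest) mem = case a of
  Read addr    -> let v = M.findWithDefault 0 addr mem
                      (vs, mem') = run rest mem
                  in (v : vs, mem')
  Write addr v -> run rest (M.insert addr v mem)
```

A logic like the abstract's Access Logic would then state pre/postconditions over such sequences; for example, `run [Write 1 42, Read 1] M.empty` yields the reads `[42]`.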

A Verified Foreign Function Interface between Coq and C

Speaker: Joomy Korkut

Joomy Korkut is a researcher at Bloomberg's CTO Infrastructure and Security Research team in New York City, where he works on formal methods. His research interests include interactive theorem provers, metaprogramming, property-based testing, and separation logic. He holds a Ph.D. (2024) in computer science from Princeton University.

Abstract

One can write dependently typed functional programs in Coq, and prove them correct in Coq; one can write low-level programs in C, and prove them correct with a C verification tool. We demonstrate how to write programs partly in Coq and partly in C, and interface the proofs together. The Verified Foreign Function Interface (VeriFFI) guarantees type safety and correctness of the combined program. It works by translating Coq function types (and constructor types) along with Coq functional models into VST function-specifications; if the user can prove in VST that the C functions satisfy those specs, then the C functions behave according to the user-specified functional models (even though the C implementation might be very different) and the proofs of Coq functions that call the C code can rely on that behavior. To achieve this translation, we employ a novel, hybrid deep/shallow description of Coq dependent types.

Dependable Computing

Speaker: Alain Kägi

Alain Kägi is an assistant professor of computer science at Lewis & Clark College. Prior to joining academia he worked many years at Intel Corporation.

Abstract

We are increasingly reliant on computers. Today's commerce seems unavoidably tied to computing systems, and it is conceivable that, in a few years' time, sectors like transportation and healthcare may not function without computers. Yet almost every day brings news of a cyber incident highlighting the vulnerabilities inherent in our reliance on technology. This state of affairs leads some to argue that we must rewrite much of our software with better discipline and, to the greatest extent possible, formal verification. In this talk, I'll describe a minimal, high-performance, pure IPv6 networking stack we have built from the ground up, along with the steps we're taking to prove that its implementation matches established specifications.

Cultivating Performance with types and without

Speaker: Artem Pelenitsyn

Artem Pelenitsyn is a postdoc at Purdue University (USA) working with Milind Kulkarni on topics such as the Gibbon tree-traversal compiler, the GPU RT core programming model, and sparse tensor compilers (since 2023). Before that, Artem completed a Ph.D. at Northeastern University (USA) with Jan Vitek, working on the Julia language's subtyping relation and JIT compilation strategy (2018–2023), and worked as a research assistant at Czech Technical University (2017). Earlier, Artem was teaching faculty at Southern Federal University (SFedU, Russia), where, besides teaching courses such as Introduction to Programming, Functional Programming, and Computer Architecture, he supervised students on topics in functional programming (2011–2016). Artem obtained his M.Sc. in CS (2009) and B.Sc. in CS (2007) from SFedU. Artem likes it when types facilitate compiling efficient programs, and will be on the academic job market in the upcoming 2025–2026 cycle.

Abstract

While PL theorists use types to ensure certain properties of languages, compiler engineers employ types to produce fast programs. In this talk, I present three etudes in the latter. First, I introduce the Gibbon compiler, which turns general tree traversals written in a (strict variant of) micro Haskell into efficient C programs by challenging the standard approach to compiling algebraic datatypes. I also show our recent Gibbon development that achieves even better performance by automatically adjusting the memory layout of datatypes to reflect the algorithms in your program. Second, I turn the focus to the dynamically typed language Julia, which uses type-based dynamic dispatch to serve computational scientists' need for ad hoc polymorphism. Normally, dynamic dispatch impedes performance, but Julia's clever JITted semantics compile it away when the user's code exhibits so-called type stability. Lastly, I invite the audience to the typeless land of GPU programming and discuss how recently developed GPU ray-tracing cores optimize tree traversals of certain kinds. I describe some of the novel non-graphics applications of RT cores we have worked on, and invite the audience to brainstorm about making this hostile land more typeful.
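To make the first etude concrete, here is the flavor of source program Gibbon targets: a plain algebraic-datatype traversal. This is a generic illustration under my own assumptions, not code from the Gibbon distribution; Gibbon's contribution is to compile such recursion into C that walks a serialized, pointer-free representation of the tree:

```haskell
-- A strict binary tree and a textbook traversal over it. Gibbon compiles
-- programs of this shape by changing how the Tree datatype is laid out in
-- memory (a flat buffer rather than heap nodes with pointers).
data Tree = Leaf !Int | Node Tree Tree

sumTree :: Tree -> Int
sumTree (Leaf n)   = n
sumTree (Node l r) = sumTree l + sumTree r
```

For example, `sumTree (Node (Leaf 1) (Node (Leaf 2) (Leaf 3)))` evaluates to `6`; the abstract's layout-adjustment work would pick a serialization order for `Node`'s fields suited to traversals like this one.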

Guided Tour Through the optics Library

Speaker: Katie Casamento

Inference for applicative functors in DSLs

Speaker: Brent Yorgey

Brent Yorgey is an associate professor in the department of Mathematics & Computer Science at Hendrix College in Conway, Arkansas, USA. His research interests lie broadly within the fields of programming languages and discrete mathematics, with particular interests in functional programming languages, embedded domain-specific languages, type theory, combinatorics, and education. He can also occasionally be found playing the piano, reading speculative fiction, and trying not to get injured while mountain biking.

Abstract

McBride and Paterson's idioms, or applicative functors, are a well-known abstraction defining function application in some ambient effectful context. It would often be useful, especially when designing domain-specific languages, to infer such "idiomatic" application, allowing the programmer to use plain function application syntax, but only when doing so is unambiguous. I will present work in progress characterizing the conditions necessary for this to be possible.
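The explicit "idiomatic" application the abstract wants to infer looks like the following in standard Haskell (the function name `addM` is mine, for illustration):

```haskell
-- Explicit applicative (idiomatic) application: lifting ordinary function
-- application into an effectful context using <$> and <*>.
addM :: Maybe Int -> Maybe Int -> Maybe Int
addM mx my = (+) <$> mx <*> my
```

Here `addM (Just 2) (Just 3)` gives `Just 5`, while `addM Nothing (Just 3)` gives `Nothing`. The inference the talk proposes would let a DSL programmer write the plain-looking `(+) mx my` and have the `<$>`/`<*>` plumbing inserted automatically, provided the elaboration is unambiguous.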

Neurosymbolic Approaches to Safe Reinforcement Learning

Speaker: Greg Anderson

Greg Anderson works at the intersection of programming languages and machine learning, focusing on techniques for improving the safety and robustness of deep learning systems. Greg is especially interested in neurosymbolic learning, a field of research based on blending deep neural networks with traditional, structured programs to get the best aspects of both worlds. He also works on algorithms for verifying neural networks directly, as well as on synthesizing traditional programs. Prior to joining the faculty at Reed, Greg earned a B.S. in Computer Engineering and Mathematics from the University of Virginia and a Ph.D. in Computer Science from the University of Texas at Austin.

Abstract

In recent years, deep reinforcement learning (RL) has become a powerful tool for solving a variety of challenging problems. However, it is impossible to predict the behavior of many RL algorithms during training, which hampers their adoption in safety-critical tasks. In this talk, we will look at how to address this problem via neurosymbolic learning, a field of machine learning which combines gradient-based neural training with symbolic reasoning. This combination allows us to maintain the performance benefits of deep learning while incorporating safety guarantees from traditional program verification. Specifically, we will discuss two neurosymbolic learning algorithms, REVEL and SPICE, which can enforce safety under different assumptions about the underlying RL problem.