Posts

Showing posts from 2012

Interactivity, week 3

Last week I discussed some newly explored proof-theoretic relationships between reasoning about process calculi and substructural logics, and vaguely mentioned some other programming models that I wanted to compare. Since then, I think I've come up with a way to phrase my research objective for this project: to relate the programming and reasoning methodologies surrounding interactivity through a common medium of proof theory. That's not a thesis statement, but I figure it's better than "*waves hands excitedly* Interactivity! Processes! Linear logic! Focusing!" Anyway. What I've done so far has given me more questions than answers, but I like the questions a lot. The first of these questions I want to discuss in this post is "How can proof theory capture an appropriate notion of broadcast message replication?" To explain what I mean by that, I'll describe an example that I think captures some common interactive communication patterns.
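To make the replication question concrete (my own illustration, not taken from any of the papers mentioned here): in linear logic a message is consumed by the receiver that uses it, so broadcasting one message to several listeners forces a choice about where the copies come from.

```latex
% One message, one receiver: the implication consumes the message.
m \otimes (m \multimap p_1) \vdash p_1
% Naive broadcast to two listeners needs an explicit copy of the message:
m \otimes m \otimes (m \multimap p_1) \otimes (m \multimap p_2) \vdash p_1 \otimes p_2
% The exponential permits unboundedly many copies, erasing any replication bound:
{!}m \otimes (m \multimap p_1) \otimes (m \multimap p_2) \vdash p_1 \otimes p_2
```

The tension is that plain tensor makes the sender count its audience, while \(!m\) gives up any control over how many times the message is heard.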

Interactivity, week 2

My post about week 1 established a new research project I'm working on involving programming interactivity with substructural logic. This past week, I've been working on digesting some background and related work. One thing I did was spend a good long stretch working through the core ideas in Yuxin, Iliano, and Rob's paper Relating Reasoning Methods in Linear Logic and Process Algebra. Linear contexts as processes: the point of the paper is to establish that a so-called "logical preorder" on contexts, \(\Delta \preceq_l \Delta'\), defined (I'm simplifying to the strictly linear case) as \[ \text{for all } C,\ \Delta \vdash C \ \text{implies}\ \Delta' \vdash C \] where \(\vdash\) is entailment in linear logic, corresponds to traditional notions of contextual preorder in CCS/process algebras, which are defined coinductively by a reduction relation extended over a laundry list of structure-preserving properties. The symmetric closure of th
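As a toy example of my own (not from the paper): since cut lets a context stand in for anything it proves, a context always sits above each of its single consequences in the logical preorder, but not conversely.

```latex
\text{Take } \Delta' = \{A,\ A \multimap B\}, \text{ so that } \Delta' \vdash B.
\text{If } \{B\} \vdash C, \text{ then by cut } \Delta' \vdash C, \text{ hence } \{B\} \preceq_l \Delta'.
\text{The converse fails: } \Delta' \vdash A \otimes (A \multimap B), \text{ but } \{B\} \not\vdash A \otimes (A \multimap B).
```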

Programming interactivity with substructural logic

This will be the first of four posts in which I attempt to organize my thoughts about an idea for a project. It's related to logic programming interactive fiction and the OBT talk I gave this past January, but I'm trying to extend the ideas beyond interactive fiction -- which represents one particular interaction pattern -- into more general process-calculus-ish terms. Rob Simmons, who just defended a thesis that I take a great deal of inspiration from, has a good primer on substructural logic programming over on his blog. To review briefly, the key concepts I'm building on are: logic programming, in which a program is specified by declaring several atomic base types and a collection of logical propositions describing how they relate to one another, and a particular proof search strategy determines the semantics of its execution; forward-chaining or bottom-up logic programming, a particular proof-search (i.e., execution) strategy where a collection of base fa
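To fix intuitions, here is a minimal sketch of my own (a toy, not Rob's framework) of forward-chaining over a multiset of atomic facts, where firing a rule consumes its premises in the substructural style:

```python
from collections import Counter

def step(state, rules):
    """Try rules in order; fire the first whose premises are all present,
    consuming the premises and adding the conclusions (multiset rewriting)."""
    for premises, conclusions in rules:
        need = Counter(premises)
        if all(state[atom] >= n for atom, n in need.items()):
            # Counter subtraction drops atoms whose count reaches zero.
            return state - need + Counter(conclusions), True
    return state, False

def run(facts, rules, max_steps=100):
    """Forward-chain to quiescence: stop when no rule applies."""
    state = Counter(facts)
    for _ in range(max_steps):
        state, fired = step(state, rules)
        if not fired:
            break
    return state
```

For instance, a single rule rewriting two nickels into a dime, applied to four nickels, saturates at two dimes: `run(["n"] * 4, [(["n", "n"], ["d"])])` yields `Counter({"d": 2})`.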

#TWIL: monads, parallelism, geometry

It's been a high-input, low-output sort of week. In the interest of accountability and perhaps higher retention, I'll try to summarize some of the things I've learned. Themes include monads, parallelism, and geometry. :) Umut Acar on Dynamic Parallel Programming: Umut describes two ideas from CMU theses with which I'm familiar, parallel cost semantics using computation graphs (Daniel Spoonhower) and self-adjusting computation (Umut himself, though I'm familiar with it through Ruy Ley-Wild). In the former, one can analyze the parallelism of an algorithm as a function of the dependencies between computations; the algorithm is characterized in terms of work, the total cost, and depth, the length of the longest critical path through the dependency graph. Bob Harper wrote a nice post summarizing these ideas. With self-adjusting (or dynamic) computation, the idea is that you can re-run a program on slightly-changed inputs and, if the problem is stable (small i
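The work/depth characterization can be sketched over a tiny dependency graph (my own toy encoding, not Spoonhower's formulation):

```python
# A computation graph: node -> (cost, list of dependency nodes).
graph = {
    "a": (1, []),
    "b": (1, []),
    "c": (2, ["a", "b"]),  # c joins the results of a and b
    "d": (1, ["c"]),
}

def work(g):
    """Work = total cost summed over all nodes (sequential running time)."""
    return sum(cost for cost, _ in g.values())

def depth(g):
    """Depth (span) = cost of the longest dependency path
    (running time with unbounded parallelism)."""
    memo = {}
    def d(node):
        if node not in memo:
            cost, deps = g[node]
            memo[node] = cost + max((d(p) for p in deps), default=0)
        return memo[node]
    return max(d(n) for n in g)
```

Here `work(graph)` is 5 but `depth(graph)` is only 4, since `a` and `b` can run in parallel; the ratio of work to depth bounds the available parallelism.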

Lambda hackulus: machinery for Bohm's theorem

I've been distracting myself recently by doing some untyped-lambda-calculus hackery, so I may as well write it up, because I don't think I've seen this particular kind of hackery described elsewhere, and I'm trying to get better at day-to-day documentation. It's kind of cute and probably not very profound, but if anyone sees connections to other work, please point it out. The context here is that I was attempting to understand Bohm's theorem (a la [1] or [2]), and Twelf is my usual tool for understanding things about lambda calculi. So I started with the bog-standard Twelf encoding of the untyped lambda calculus: exp : type. lam : (exp -> exp) -> exp. app : exp -> exp -> exp. ...with the goal in mind of eventually formalizing Bohm's theorem. (I do still have that goal in mind, by the way, and will hopefully post about it when completed.) The puzzle is how to write programs over such terms that implement the various meta-level functions
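For comparison (a loose sketch of my own, not the Twelf development), the same higher-order encoding can be mimicked in Python, with host-language functions standing in for lam's binder; even weak head reduction already shows the shape of programming over such terms:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Var:
    name: str                   # free variable, used for results/printing

@dataclass
class Lam:
    body: Callable[[Any], Any]  # HOAS: the binder is a host-language function

@dataclass
class App:
    fn: Any
    arg: Any

def whnf(e):
    """Reduce to weak head normal form: beta-reduce at the head only."""
    while isinstance(e, App):
        f = whnf(e.fn)
        if isinstance(f, Lam):
            e = f.body(e.arg)     # beta step via host-level application
        else:
            return App(f, e.arg)  # head is stuck on a free variable
    return e
```

For example, `whnf(App(Lam(lambda x: x), Var("y")))` yields `Var("y")`. The catch, as in Twelf, is that the binder is opaque: you can only inspect under a `Lam` by applying it to something, which is exactly what makes meta-level functions over such terms puzzling to write.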

As if people matter

I've been wanting to respond to this essay lamenting the disconnect between PL popularity and PL research since it started making the rounds earlier this week, and I've wanted to write a post about what I think PL theory and interaction design have to do with one another for even longer; possibly I can tie these thoughts together in a single post. [WARNING: much fuzzy speculation that lacks concrete substantiation herein, but I wanted to put these thoughts together before they dissolve into my subconscious any further...] "Programming languages as if people matter": the larger context for these thoughts is the ongoing battle I have been mentally cataloging for years (and which certainly predates my involvement in academic research, so I'm bound to say some misinformed things -- please correct me!), the frequent and frustrating friction between researchers in type theory and logic (in my terms "PL theory") and those who criticize such work for foc