
Tuesday, April 18, 2023

Toyfl, finished at last

When I started this blog ten (!) years ago, I was creating a programming language called Toyl (Toy Language).

In the intervening years I have thrown away and rebuilt my language several times, and I now consider it essentially finished. It is now called Toyfl (Toy Functional Language).

Syntactically it is somewhat inspired by JavaScript, with lambda expressions like the following:

(a, b, c) => a * b + c

And anonymous objects like this:

{a = 10; b = 32; c = 11; f = (x) => a}

And a dot operator for accessing members:

x = {a = 10; b = 32};

y = x.a

And lots of other familiar things.

There are only a few features of the language that are worth remarking on, but since the purpose of this blog was originally connected to developing the language, I'll remark on them here:

1) Extreme Laziness. If you have an expression like if(a == b, a + 37, b + 22), Toyfl will only evaluate the condition a == b. It will then return one of the two expressions in the then or else case without actually evaluating them. Nothing gets evaluated until it is really, really needed.
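You can picture this with thunks. Here is a rough C# analogy (my sketch, not how Toyfl is implemented), where If evaluates only the condition and hands back the chosen branch unevaluated:

using System;

class LazyIfDemo {
    // Evaluate only the condition; return the chosen branch still wrapped.
    static Lazy<T> If<T>(bool condition, Lazy<T> thenExpr, Lazy<T> elseExpr) {
        return condition ? thenExpr : elseExpr;
    }

    static void Main() {
        int a = 5, b = 5;
        Lazy<int> r = If(a == b,
            new Lazy<int>(() => a + 37),
            new Lazy<int>(() => b + 22));
        Console.WriteLine(r.Value);  // 42: evaluation happens here, if at all
    }
}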

That's connected to a second weird feature of Toyfl:

2) Currying by Refactoring. Since Toyfl doesn't evaluate an expression until it is really, really needed, an expression returned from within a function needs to be context-independent. That is, it needs to be able to remember what relevant parameters were passed into the function when it was invoked. I handle that by creating a refactored copy of the body of the function. So, if you have a function like this:

f = (x, y) => x * y

and you invoke it like this:

g = f(a, b + c)

then what you get for g is actually a structure representing the expression a * (b + c).
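To illustrate the refactoring, here is a minimal C# sketch (my illustration, not Toyfl's actual implementation) in which applying a function substitutes the argument expressions for the parameter variables in a copy of the body:

using System.Collections.Generic;

abstract class Expr { }
class Var : Expr { public string Name; }
class BinOp : Expr { public char Op; public Expr Left, Right; }

static class Refactor {
    // Return a copy of the body with argument expressions substituted
    // for the parameter variables.
    public static Expr Substitute(Expr body, Dictionary<string, Expr> args) {
        if (body is Var v)
            return args.TryGetValue(v.Name, out Expr e) ? e : v;
        if (body is BinOp op)
            return new BinOp {
                Op = op.Op,
                Left = Substitute(op.Left, args),
                Right = Substitute(op.Right, args)
            };
        return body;
    }
}

// For f = (x, y) => x * y invoked as f(a, b + c), substituting
// { x -> a, y -> b + c } into the body x * y yields a * (b + c).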

Originally it rubbed me the wrong way to do this, because somewhere deep inside I have this feeling that code should remain static, but the effort required to carry out refactoring is actually pretty slight, and in a functional context it is very reliable.

However, this behavior means that recursive functions actually build large in-memory expressions. Memory is cheap these days, so I don't worry too much about that for ordinary use-cases, but I thought it would be interesting to solve the problem anyway. So I created:

3) Special syntax for tail recursion. I noticed that the simplest functional implementations of certain solutions required significantly more calculation than the simplest imperative implementation. I wanted a way to bring the efficiency of imperative solutions to my functional language, so I created the following syntax to bridge this gap.

fibonacci = (n) => (
    for {
        a = 1;
        b = 1;
        i = 1
    }
    let {
        nextA = b;
        b = a + b;
        a = nextA;
        i = i + 1
    }
    until i == n
).a

This looks like a loop in an imperative language, but effectively it turns into something like this:

fibonacci = (n) => innerFibonacci(1, 1, n)

    where innerFibonacci = (a, b, i) => if (i == n, a, innerFibonacci(b, a + b, i + 1))

Normally Toyfl would build a huge structure out of this, but the special syntax tells Toyfl to fully evaluate each of the parameters a, b, i before the recursive invocation, so we can take advantage of tail recursion.
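Operationally, the for/let/until form behaves like an ordinary loop whose state variables are fully evaluated on every pass. In C# terms (my sketch of the semantics, not anything Toyfl generates):

static int Fibonacci(int n) {
    int a = 1, b = 1, i = 1;   // the "for" block
    while (i != n) {           // loops until i == n (assumes n >= 1)
        int nextA = b;         // the "let" block, fully evaluated each step
        b = a + b;
        a = nextA;
        i = i + 1;
    }
    return a;                  // the trailing .a
}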

So that's Toyfl. It's been fun.

Monday, March 23, 2015

A functional database

I mentioned in an earlier post that I was again working on a functional programming language. Probably the reason it is going so well is that the language itself is not my main goal. What I really want to do is build a functional database.

On the face of it, this seems like a ridiculous idea, because in order to be purely functional you would need to pass the whole database in as a parameter to any query, and return the updated database whenever you make a change to it.

But put aside the ridiculousness of it for a moment and imagine the following:

1. Your database is represented as a linked list of data-changing transactions, so changing the database is just a matter of consing the new transaction onto the transaction chain.

2. Every transaction in the database carries a timestamp and a checksum, where the checksum is built from the data in the transaction and the checksum of the last transaction.

3. The data storage is trusted and the checksum has a vanishingly low rate of collision, so the checksum at time T can act as a reliable proxy for the whole database at time T.
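For concreteness, here is a minimal C# sketch of such a transaction chain (the class and member names are mine, and SHA-256 stands in for a checksum with a vanishingly low collision rate):

using System;
using System.Security.Cryptography;
using System.Text;

sealed class Transaction {
    public readonly DateTime Timestamp;
    public readonly string Data;
    public readonly Transaction Previous;  // the rest of the chain, or null
    public readonly byte[] Checksum;       // hash of this data + previous checksum

    public Transaction(string data, Transaction previous) {
        Timestamp = DateTime.UtcNow;
        Data = data;
        Previous = previous;
        string prior = previous == null ? "" : Convert.ToBase64String(previous.Checksum);
        using (var sha = SHA256.Create())
            Checksum = sha.ComputeHash(
                Encoding.UTF8.GetBytes(Timestamp.ToString("o") + data + prior));
    }
}

// "Changing" the database is just consing:
// var db = new Transaction("INSERT ...", null);
// var db2 = new Transaction("UPDATE ...", db);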

What good would this thing be? I think it could be very powerful for certain applications. Consider the following:

  • All changes to the database would be audited. In an important sense, the database itself is the audit trail.
  • It would be trivial to add a date/time parameter to any query and get back the result of the query as it would have been at any time in the past.
  • The checksum could be used to verify whether the results of a query did indeed come from a given database at a given time.
  • Brute-force backdoor changes to the transaction chain would show up in the checksums.
  • You could allow "alternate universes" to be forked off of the main database at any time, to experiment with different scenarios. These would exist in parallel time with the main database, but would be distinguished by their own checksums.

This database would be useful in environments where data needs to be reliable and auditable.

Wednesday, March 18, 2015

Nearly full circle: A functional programming language

My mind is like an attic, stuffed with everything I can't bear to get rid of, and occasionally I bring something out and work on it (only to put it back later).

The earliest posts that I wrote on this blog related to my plan to create a functional programming language. Over the last few weeks, I have actually done that. It's nothing pretty, but it really does work, with lambda expressions and everything.

I worked backwards, starting with how the language would work on the inside, without worrying about what it would look like on the outside. The final language isn't finished yet, but I created a kind of intermediate testing language to use for checking the architecture.

The testing language is not supposed to be pretty. My main consideration here was to create a language that could be parsed by a very simple parser (with no lookahead), so that meant leaving out a lot of syntactic sugar and salt.

Here is my first program in the testing language.

using System
using FBaseUnitTests
let plus
    method Program _test006plus [ Int32 , Int32 ]
let ifzero
    lazyMethod Program _test006ifzero [ Int32 , Int32 , Int32 ]
let func
    lambda [ Int32 ] [ x ]
        apply variable ifzero [
            variable x ,
            value Int32 1 ,
            apply variable plus [
                variable x ,
                apply variable func [
                    apply variable plus [
                        variable x ,
                        value Int32 -1
                        ]
                ]
            ]
        ]
let r
    apply variable func [ value Int32 16 ]

The most important thing here is that functions are first-class citizens. The variables plus and ifzero hold native methods (one of which has some lazy parameters), while the variable func holds a lambda expression.

Sunday, November 10, 2013

A chartreuse analogy

This is a quick follow-up on the last post, where I sketched an idea for a functional programming language where the lambda was implicit.

First, let me repudiate the whole idea of the unresolve metafunction. There's really no need for it, and it just makes things messy. I'm going to go back and cross that bit out.

Second, it might have occurred to you that, if the lambda is implicit in the way I described, then there is no distinction between values and expressions that take no parameters. At first, that seems like a problem, but I don't think it is. In a pure functional context, what is really the difference?

Of course, even the purest functional language must eventually sully itself by making contact with this sordid and messy world, and there you might have a problem, since your parameterless functions need to be evaluated in order for any side-effects to occur.

Which brings me to an idea about how to separate the pure (and provable) side of a program from the impure, messy side: Rather than trying to incorporate both worlds in one language, let them be separate languages.

Functional languages are bad at being imperative, and imperative languages are bad at being functional. So strictly separate the two by writing the core code in a functional language, and accessing it from the real world via an imperative language. Naturally, you want to be able to compile the two together, and you want the languages to have common features, but let it be clear when you are coding in one or the other, and let the compiler enforce the purity of the functional side.

You could think of the functional (sub-)language as being like the eremitical side of a Carthusian monastery, and the imperative (sub-)language as being like the cenobitic side. In a Carthusian monastery you have the hermits, whose lives are dedicated to prayer and contemplation, and the lay monks, who serve the hermits and maintain the monastery. The two are strictly separated and work in different domains, but are integrated with each other in a symbiotic way.


Saturday, November 9, 2013

Implicit lambdas and thusness: a silly idea

When my brother and I were out camping with our dad, we used to sit around the fire at night and ask all kinds of "what if" questions.

Tonight, I've gotten the kids to sleep, washed the dishes, and done a little amateur skimming of the Romani internet. (Who couldn't love a language that calls a black widow spider yalakráno, and a pear tree ambrolin?)

Now I am tired, and unable to do anything except ask "what if?"

What if you had a functional programming language where the lambda was implicit, so any expression with an unresolved term was a lambda expression, and the unresolved terms were the parameters?

Obviously, you would need a couple of things to make this work. First, you would need to do away with a lot of syntactic sugar, because your parser needs to be able to operate without necessarily knowing anything about the referents of the symbols it is operating on. Second, you would need to have some way to recognize which input values to a lambda expression match which parameters.

But both of those are just conventional matters, not real theoretical difficulties. Let's say, for the sake of this post, that we have a syntax like Lisp, and parameters are ordered as they appear in the source code.

Now for the second bit: What if you were able to reference a function from within itself using a special term, similar to the way you can use the variable this, self or me in object-oriented languages?

For the sake of this post, let's call this special term thus.

If you had a language with both of these features, you could write a lambda expression for the factorial function as follows:

(if (eq x 1) 1 (multiply x (thus (subtract x 1))))

We assume the terms if, eq, multiply, subtract, and the value 1 are known (thus is special), so the only unresolved term is x, which becomes the sole parameter of the expression.
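In C#, the closest I can get is to make thus an explicit parameter and tie the knot afterwards (a sketch of the idea, since C# has neither implicit lambdas nor thus):

using System;

class ThusDemo {
    static void Main() {
        // The body takes its own self-reference ("thus") as a parameter.
        Func<Func<int, int>, Func<int, int>> body =
            thus => x => x == 1 ? 1 : x * thus(x - 1);

        Func<int, int> factorial = null;
        factorial = body(y => factorial(y));  // tie the knot
        Console.WriteLine(factorial(5));      // 120
    }
}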

Now, what if you could take a lambda expression in this language and override some of the known terms, producing functions with similar structure but different behavior? So, maybe have a metafunction called unresolve that returns a term of an expression from the known space to the parameter space. So, if you wanted a function like factorial that took the parameter x and also some function to use instead of multiply, you could say:

(unresolve factorial multiply)

Then you could substitute a different function, like divide:

((unresolve factorial multiply) divide)

These are the sorts of vapid ideas a tired mind produces.

Monday, August 19, 2013

Recursive call optimization

In a previous post, I wrote the map function in F#, compiled it, disassembled it, and expressed my disappointment that it was recursive.  What I had hoped for was something like tail call optimization.

Often, in my imperative programming career, I have had to choose between implementing a process in a recursive way or a looping way.  In one language that I work with, the number of nested blocks allowed is limited, so I must be very careful about how I build recursive solutions.  I usually view the loop implementation as inferior because it lacks the beauty of the recursive one, but it is generally more efficient.  Since functional programming relies extensively on recursion, however, it often pays to turn recursive function calls into loops at compile time.

Functions require memory to process calculations.  For convenience, I'll refer to the units of memory used as variables, even if (strictly speaking) they are not variable.  The effective scope of a variable begins when it is assigned a value, and it ends at the last point that the value is referenced.  Since some control branches of a function may not reference a variable, the scope may cover parts of the expression tree, rather than a linear range of lines of code.

If no variables are in scope at the time of a recursive call, then the recursive call can be implemented as a pair of loops.  The first loop of the pair represents the descent into the recursive call, while the second loop represents the return.

Since most variables are out of scope by the end of a function, a recursive call at the end of the function can usually be implemented as a single loop (since nothing is done on return), which is called tail call optimization.  However, the general rule allows recursive calls to be implemented as a pair of loops anywhere, provided the recursive call falls outside the scope of all other variables.

Here is a contrived example of a recursive C# method that can be optimized in this way:

public int TestFunction(int n) {
  // Variable r is defined here, but not in scope yet,
  // because its initial value will not be used
  int r;

  // Start of scope for t
  int t = n * n + n;

  // Start of scope for j
  int j = 2 * t - 13;

  // Recursion should stop somewhere
  if (t < 100000 && j < 100000) {
    // Scope of t and j ends here on this execution branch.
    // Scope of r starts here
    r = TestFunction(t + j) - 7;
  } else {
    // Scope of t and j ends here on this execution branch.
    // Scope of r starts here
    r = t - j;
  }

  // Scope of r ends here
  // Scope of k starts here
  int k = 3 * r;

  // Scope of k ends here
  return k;
}

The optimized implementation of this recursive function is as follows:

public int TestFunction(int ntop) {
  int n;
  int t;
  int j;
  int r;
  int k;
  int loopCount = 0;

  // Descent loop
  n = ntop;
  while(true) {
    t = n * n + n;
    j = 2 * t - 13;
    if (t < 100000 && j < 100000) {
      n = t + j;
      ++loopCount;
    } else {
      break;
    }
  }

  // Bottom of call processing
  r = t - j;
  k = 3 * r;

  // Return loop
  while (loopCount > 0) {
    r = k;
    r -= 7;
    k = 3 * r;
    --loopCount;
  }

  return k;
}

Thursday, August 8, 2013

Infinite lists and function composition in C#

Last night I was critical of the way that F# handled lists.  This morning, I decided to back up my critique by implementing the functionality I was talking about in C#.  I have put a simplified version of it in a gist.  This was easily one of the most rewarding mornings of coding I have had in a long time.

First, I created FunctionalList<T>, which can be initialized with a lambda expression.  So, where Haskell might have the following:

    let squares = [x | a <- [3,6..27], let x = a * a]
    show squares
    "[9,36,81,144,225,324,441,576,729]"

I can now do the following in C#:

    FunctionalList<int> squares = HigherOrderFunction<int, int>.Map
        (
        a => a * a,
        FunctionalList.Series(3, 6, 27)
        );
    Console.WriteLine(squares);
    [9, 36, 81, 144, 225, 324, 441, 576, 729]

So far, I have implemented the higher order functions Map, FoldLeft and FoldRight, as well as basic functionality for the lists such as Head, Tail, Take, Skip and Construct.  I can now do what I said F# should be able to do.  For the following Haskell expression:

    head $ map' (^2) [9,19..99999999]

I can write in C#:

    HigherOrderFunction<int, int>.Map
        (
        x => x * x,
        FunctionalList.Series(9, 19, 99999999)
        ).Head

Granted, my C# version is syntactically clunky, but it returns immediately with the correct result.  It does not generate a huge in-memory list, and it works by composing the square function (x => x * x) over the function that returns the elements of the underlying series (x => x * 10 + 9).

So there.

What did I expect?

In my last post, I expressed some disappointment that my version of the map function in F# processed every element of the list.  But why should I be disappointed?  Isn't that exactly what I asked it to do?

Here's why:  In Haskell, I can use my homegrown map function against a huge list, and get the head element of that list, as follows:

head $ map' (^2) [9,19..99999999]

This returns immediately with the answer "81".  If I try something similar in F#, I get a long pause and an out-of-memory exception.

So why doesn't Haskell do exactly what I told it to, and try to process every element of the list?  I've been playing with reproducing the behavior in C#, and here is what I think is going on:

As an imperative programmer, I expect my arrays (or lists, or whatever) to be data structures in memory.  When I try to get the nth element of a list, I expect the machine to start at the head, walk n elements, and then return whatever it finds.  However, in the case of something like [9,19..99999999], that model is pointless.  The reason is that the value I find at the nth element is dependent on n, so it is really more like a function, λ n → 10n + 9.

If we treat this huge list-like thing as a function instead of a data structure, then mapping another function onto it is just a matter of composition.  That's the magic I was hoping to find hidden in the FSharpList class, but I did not find it.
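The essence of that composition fits in a few lines of C# (a sketch of the idea, not the code from my gist):

using System;

class ComposeDemo {
    static void Main() {
        Func<int, int> series = n => 10 * n + 9;         // [9,19..] as a function of the index
        Func<int, int> square = x => x * x;
        Func<int, int> mapped = n => square(series(n));  // map is just composition

        Console.WriteLine(mapped(0));  // head: 81, and no list was materialized
    }
}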

One last critique of the way my F# version of map was compiled:  It compiled into a recursive method, but that will end up using space on the stack every time it is called, and it could have been done more efficiently as a loop.  It is possible that the JIT compiler will resolve this when the procedure is actually rendered into runtime code, but I am not sure about that.

Tuesday, August 6, 2013

map in F# and IL assembly

I played with F# last night, and it made me realize how mature Haskell is compared to F#. It's too bad you can't compile Haskell code into a .NET assembly.

The map function in F# is pretty concise, compared to the wordy C# version.  Here it is:

let rec map (f : ('a -> 'b), ps : 'a list) =
    if ps = list.Empty
        then list.Empty
        else list.Cons(f(ps.Head), map(f, ps.Tail))

This is not so terribly different from the Haskell version of map, which could be written as follows:

map' :: (t -> a) -> [t] -> [a]
map' _ [] = []
map' f (p:ps) = (f p) : (map' f ps)

There are a number of differences (including the F# requirement that we use rec to indicate a recursive function), but the important similarities are that we can write a higher order function, and we have generic types.

So, what does the F# version look like in IL assembly?  To begin with, the method header is reassuring, because it shows that the generic type parameters go all the way down to the IL assembly level:

.method public static class [FSharp.Core]Microsoft.FSharp.Collections.FSharpList`1<!!b> map<a,b>
(
class [FSharp.Core]Microsoft.FSharp.Core.FSharpFunc`2<!!a,!!b> f,
class [FSharp.Core]Microsoft.FSharp.Collections.FSharpList`1<!!a> p
) cil managed

We can see that F# provides its own list and function objects, which makes sense given that this functionality isn't standard for the rest of the .NET world.  However, the body of the function reveals something disappointing.  It processes the whole list!  Rather than paste the entire function body here, I'll just write a brief pseudocode paraphrase of it:

IL_0002: push ps.Length onto the stack
IL_0007: if non-zero branch to IL_0013
IL_000d: push an empty list onto the stack
IL_0012: return
IL_0013: push the head of ps onto the stack
IL_001a: invoke function f and push the result onto the stack
IL_0021: push the tail of ps onto the stack
IL_0026: call map and push the result onto the stack
IL_002b: call cons and push the result onto the stack
IL_0030: return

So, from what I have seen here, though F# may look like Haskell on the surface, underneath the covers it is far less advanced.  However, the IL asm natively supports some functionality (like generic type parameters) that suggests it would be possible to build a really powerful functional language that compiled into IL asm.

Monday, August 5, 2013

map in C#

I was recently at a meeting where someone remarked that you could not write a higher-order function like map in C# because of strong typing.  In fact, you can write the map (and similar) functions in C# using generic type parameters, but there are other weaknesses.

For the curious, here is one way to write map in C#:

class HigherOrderFunction<T1, T2> {
  public delegate T2 function(T1 p);

  public static IEnumerable<T2> Map(function f, IEnumerable<T1> ps) {
    List<T2> result = new List<T2>();

    foreach (T1 p in ps) {
      result.Add(f(p));
    }

    return result;
  }
}

The problem with this function is that it is not lazy.  If the input array is huge, but you only need a few values out of it, you still have to process all of the other values before you can get the one you want. If you want lazy evaluation here, the code becomes very convoluted.

In my next post, I'll write the map function in F#, compile it, disassemble it, and see how it works under the covers.

Wednesday, June 19, 2013

Magma

Magma is a big deal in functional programming.

I know this, because I have attended two study groups on functional programming, and each time someone has pulled up the Wikipedia page on Magma.

The day after the last such meeting, I spent the morning wondering whether I had made a huge mistake trying to learn a programming paradigm that required a Ph.D. in an obscure branch of mathematics in order to get started.  But then it dawned on me (I think) what it's all about:

Groups are the higher order types that higher order functions operate on.

So, a magma is a set of things (M), together with a function (.) that takes two of those things as parameters and returns another element of M.  We could start with the basic magma and build up to groups, but I would rather go backwards from abelian groups.

Abelian Groups

  • The function is commutative,  so a.b = b.a
  • There is an identity element, so a.I = a
  • The set contains inverse elements, so for any a, there is a b such that a.b = I
  • The function is associative, so a.(b.c) = (a.b).c

An example of an abelian group would be the set of integers and the addition operation.  So,

  • The function is commutative,  so a + b = b + a
  • There is an identity element 0, so a + 0 = a
  • The set contains inverse elements, so for any a, there is a b (in this case, -a) such that a + b = 0
  • The function is associative, so a + (b + c) = (a + b) + c

These are handy for higher order functions like fold.  If you wanted to fold the addition operation across a tree of integers, you just need to start with the identity (0) and add in all of the integers in any order.

But the higher order fold function cannot be used to apply any old binary operation to any old tree of values from the set that the operation applies to.  If there is no identity element in the set, then the fold function needs an additional parameter to start with.  If the function is not associative, then you will need more memory to fold over a tree, because you have to fully process each branch.
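To make that concrete, here is a short C# sketch (the names are mine): a fold over a tree of integers where the identity element covers the empty tree, and associativity is what lets each branch be combined independently. If the operation lacked an identity, Fold would need an explicit seed parameter instead.

using System;

abstract class Tree { }
class Leaf : Tree { public int Value; }
class Node : Tree { public Tree Left, Right; }

static class TreeFold {
    // op must be associative, with identity as its identity element,
    // for the result to be independent of traversal order.
    public static int Fold(Tree t, Func<int, int, int> op, int identity) {
        if (t == null) return identity;        // empty tree folds to the identity
        if (t is Leaf leaf) return leaf.Value;
        var node = (Node)t;
        return op(Fold(node.Left, op, identity),
                  Fold(node.Right, op, identity));
    }
}

// Usage: TreeFold.Fold(tree, (a, b) => a + b, 0) sums the tree.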

And on it goes.

Friday, April 5, 2013

OK, I'm lazier than I thought

After toying with the algorithm to write the assembly code, I've decided to take a step back and write a precompiler into C# instead.  The reason for this is that writing the assembly code involves a lot of work I am not interested in for this project, and I believe I can accomplish the majority of what I am looking for with a precompiler.  As a later phase, if I am still interested, I'll use the precompiler as a basis for a program to write assembly code.

My next step is going to be to write a pseudo-compiler.  This will build the C# code in memory and compile it into an executable using standard libraries.

Maybe instead of calling it a "pseudo-compiler" I should call it a "higher-order compiler".  Makes me sound smarter.

Monday, March 18, 2013

Currying

In the last post, I worked out how the C# compiler represents lambda expressions as classes.  Naturally, then, curried functions should be represented as classes, too.  There are several different ways to handle this.  The method sketched out in pseudo-code here is only one of them, where I have tried to limit redundancy and memory at the possible expense of performance.  The reason I am doing it this way is because I have a motto: "Make it first, then make it fast".  Right now I can see the memory and redundancy tradeoff, but I don't even know how much performance I am buying by trying to optimize. Later, if I encounter performance problems, then I can come back and evaluate how much memory is worth how much speed.

The approach here is to declare the uncurried function as a class like the one we saw in Fig. 3.1, then to declare a chain of classes to represent the stages of currying.

// This is the uncurried function
class MyFunction {
    // Environment Part
    public int32 z;
    public int32 k;

    // Control Part
    public DateTime Value(DateTime x, int32 y) {
        return x.AddYears(int.Parse(y.ToString()) + z * k);
    }

    // Currier
    public MyFunctionCurried1 Curry(DateTime x) {
        return new MyFunctionCurried1(this, x);
    }
}

// This is the function with one parameter curried
class MyFunctionCurried1 {
    private MyFunction _curriedFrom;
    private DateTime x;

    public MyFunctionCurried1(MyFunction curriedFrom, DateTime x) {
        this._curriedFrom = curriedFrom;
        this.x = x;
    }

    // Control Part
    public DateTime Value(int32 y) {
        return this._curriedFrom.Value(x, y);
    }

    // Currier
    public MyFunctionCurried2 Curry(int32 y) {
        return new MyFunctionCurried2(this, y);
    }
}

// This is the function with both parameters curried
class MyFunctionCurried2 {
    private MyFunctionCurried1 _curriedFrom;
    private int32 y;

    public MyFunctionCurried2(MyFunctionCurried1 curriedFrom, int32 y) {
        this._curriedFrom = curriedFrom;
        this.y = y;
    }

    // Control Part
    public DateTime Value() {
        return this._curriedFrom.Value(y);
    }
}

Fig. 4.1: A curry chain (C#-like pseudocode)
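Using the chain looks something like this (my sketch, in plain C# with int standing in for int32):

// Each Curry() fixes one more parameter without re-copying the environment.
var f = new MyFunction { z = DateTime.Now.Year, k = 13 };
MyFunctionCurried1 g = f.Curry(DateTime.Now);  // x is now fixed
MyFunctionCurried2 h = g.Curry(5);             // y is now fixed
DateTime result = h.Value();                   // fully applied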

So far, so good.  In the next post I'll introduce function signatures as interfaces, so I can pass functions (and their curried variants) around to higher order functions.

Disassembled Expression

For this post, I took the sample function body from Fig. 1.3 and compiled it using a C# 2010 compiler (version 4.0.30319.1), then disassembled the executable into IL.

The result was pretty interesting, and not too different from what I imagine happens in a modern compilable functional language.

The most interesting thing, to me, is that the lambda expression was turned into a class (with the automatically generated name <>c__DisplayClass1).  Translating from CIL into C#-like pseudo-code, the class looks something like this:

class <>c__DisplayClass1 {
    public int32 z;
    public int32 k;

    public DateTime <Main>b__0(DateTime x, int32 y) {
        return x.AddYears(int.Parse(y.ToString()) + z * k);
    }
}

Fig. 3.1: Pseudo-code representation of an internally generated class for a C# lambda expression

This class basically represents the lambda expression.  The environment part is represented by the public members z and k, and the method <Main>b__0 represents the control part.  To invoke this lambda expression, first the environment part needs to be filled out by creating an instance of the class and setting its member values (creating a closure).  The closure could then be passed around and eventually evaluated by calling the method <Main>b__0 with the parameters x and y.
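In the same C#-like pseudo-code (the compiler-generated names are not legal in C# source, so this is purely illustrative):

// Fill out the environment part to create the closure...
var closure = new <>c__DisplayClass1();
closure.z = DateTime.Now.Year;
closure.k = 13;
// ...then evaluate the control part with the parameters x and y.
DateTime result = closure.<Main>b__0(DateTime.Now, 5);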

So, for me, the next question is going to be this:  If we generate classes like this to represent lambda expressions, how will we curry functions, and how will we create higher order functions?  Those are likely to be the topics of the next post.

Closure

I'm going to start with creating a closure.  Landin [1] defined a closure as a lambda expression whose open variables have been closed by lexical environment, creating a closed expression.  So, let's start with a lambda expression as represented in C#:

(x, y) => x.AddYears(int.Parse(y.ToString()) + z * k)
Fig. 1.1: A lambda expression in C#

There is a lot we don't know for certain about this lambda expression.  We can infer that x is a DateTime, but that isn't necessarily the case, and y could be almost anything.  This lambda expression at least needs type declarations for these variables, so let's provide them:

Func<DateTime, int, DateTime> f =
    (x, y) => x.AddYears(int.Parse(y.ToString()) + z * k);

Fig. 1.2: A lambda expression in C#, partially disambiguated by its environment

Here, we have implicitly defined the types of x and y by assigning the lambda expression to a delegate whose type has been made explicit.  Now, we have a lambda expression with two open variables, z and k.  In order to create a closure, we need to close the variables z and k using the lexical environment of the expression.

Suppose the expression appears in the following environment:

void g() {
    int z = DateTime.Now.Year;
    int k = 13;
    Func<DateTime, int, DateTime> f =
        (x,y) => x.AddYears(int.Parse(y.ToString()) + z * k);
}

Fig. 1.3: A lambda expression in its lexical environment

In this environment, we now have all of the information we need to populate the closure.  In Landin's terms, a closure has an "environment part" and a "control part".  Using those terms, our closure record (for the lambda expression only) could look like this:

Environment Part {
    Symbol {
        Name: z;
        Type: int;
        Value: DateTime.Now.Year;
    }
    Symbol {
        Name: k;
        Type: int;
        Value: 13;
    }
}
Control Part {
    Parameter {
        Name: x;
        Type: DateTime;
    }
    Parameter {
        Name: y;
        Type: int;
    }
    Value Expression: x.AddYears(int.Parse(y.ToString()) + z * k);
}
Fig. 1.4: A preliminary mockup of a closure record

In the next post, I'll unpack the structure of the value expression.

[1] P. J. Landin (1964), "The mechanical evaluation of expressions", The Computer Journal 6(4): 308–320