Thoughts on Crafting Interpreters - Part 1
Today I would like to discuss the Crafting Interpreters book by Robert Nystrom.
It’s a book about designing an interpreter for a dynamic programming language called Lox. Well, not exactly. It’s split into two parts - the first is about crafting a tree-walking interpreter, and the second is about writing a complete bytecode VM.
I’m also going to write two posts, one for each part of the book. This particular one is on the first part, which I completed a long time ago. A year and a half ago, actually. So most of the things I’m going to talk about here are not fresh.
Robert chose Java as the implementation language for the first half of the book. The reasoning behind it is practical and well thought out, but I simply don’t like Java. So instead, I chose Clojure, because I felt that it would be a good exercise, allowing me to sharpen my skills a bit more.
TL;DR for this post could be:
Clojure is a nice language and fits this kind of task well enough, as a tree-walking interpreter is largely a data-manipulation exercise. Clojure’s protocols provide a great way of solving the expression problem with none of the boilerplate the Visitor pattern requires. Immutability helps tremendously, eliminating bugs related to state management, at a small yet noticeable performance cost - for this particular task, at least.
Plus a few complaints about the book.
Now, if you’re interested in more, I welcome you to read the rest of the post. But first, let’s get this straight - I’ll try to follow the book’s structure here, but no promises. While the book is well-structured, it’s hard to reference specific parts of the code, as some pieces of the interpreter change frequently across the chapters.
I’m glad I’ve kept the repository around, as it contains all of the commit history. However, until the second part of the book, I was loose about commit structure. E.g. my commits for the first half look like this:
...
a6f69c8 * use protocols to solve expression problem
7412fb2 * WIP AST
4068faf * WIP parser
7990eb4 * support escapes in strings
d9b3f02 * move tokenizer to its own namespace
3d8a602 * add column info
186095f * tokenizer
While the history for the second half of the book is structured much better:
...
88dd209 * chapter 19: Strings
6116263 * chapter 18: Types of Values
742364c * chapter 17: Compiling Expressions
f1ceb50 * Chapter 16: Scanning on Demand
da0149c * chapter 15: A Virtual Machine
f2d8a18 * chapter 14: Chunks of Bytecode
...
So, forgive me if I mix things up in this post - the second post will be better structured. And, to be clear, this post isn’t really about the code but about the book, although there’s plenty of code here. I will highlight interesting differences between the Clojure and Java approaches, but I wouldn’t consider that the main topic of this post.
Scanning
In the first chapter, we start with scanning. The chapter introduces various concepts, like error handling, but what I’d like to focus on here instead is token handling.
In the book, Robert uses an enumeration type, which is probably how it is done in the Java world:
package com.craftinginterpreters.lox;
enum TokenType {
// Single-character tokens.
LEFT_PAREN, RIGHT_PAREN, LEFT_BRACE, RIGHT_BRACE,
COMMA, DOT, MINUS, PLUS, SEMICOLON, SLASH, STAR,
// ...
}
Note, I don’t know Java - I never learned it for real and went straight to Clojure. So if you’re curious whether it’s OK to go into Clojure without knowing Java first - it is. Although, I did have some Scheme background.
So unlike the Java solution, I went with Clojure’s way of enumerating things. Maps:
(def single-token-type
{\( :left_paren
\) :right_paren
\{ :left_brace
\} :right_brace
\, :comma
\. :dot
\- :minus
\+ :plus
\; :semicolon
\* :star
\= :equal
\< :less
\> :greater
\/ :slash
\! :bang})
The same goes for all other token types. It makes sense, as we’re essentially using the enum as a lookup table for character dispatch. Hence, instead of doing this:
char c = advance();
switch (c) {
case '(': addToken(LEFT_PAREN); break;
case ')': addToken(RIGHT_PAREN); break;
case '{': addToken(LEFT_BRACE); break;
case '}': addToken(RIGHT_BRACE); break;
case ',': addToken(COMMA); break;
// ...
}
We can do this:
;; ...
(case c
(\( \) \{ \} \, \. \- \+ \; \*)
(recur current line
(conj tokens (make-token (single-token-type c) (str c) line))
error?)
(\= \! \< \>)
;; ...
)
In other words, we share the body for all cases, and do a character lookup in the table to get the token type. This eliminates just a little bit of duplication.
Another thing to note, which is more important in my opinion, is that this scanning process is stateless. In the Java version, the addToken method mutates the class state. While handy, I don’t like this approach - it makes things harder to test. My version uses a conventional Clojure loop and simply adds tokens to a collection, which is then returned from the loop:
(defn tokenize [source]
(let [at-end? (complement (partial > (count source)))]
(loop [current 0 ;; current character
line 1 ;; current line
tokens [] ;; tokens
error? false] ;; error indication
(if (at-end? current)
;; when finished return a tuple containing tokens and error
;; indication
[tokens error?]
;; otherwise, we take the `nth` character from the source, and
;; dispatch on it
(let [c (nth source current)
current (inc current)]
(case c
(\( \) \{ \} \, \. \- \+ \; \*)
;; Rebind the current char to the new `current` char;
;; Use the same line, as none of these introduce a line change;
;; Add new token to an immutable vector of `tokens`;
;; pass `error?` as is, as no errors were introduced.
(recur current line
(conj tokens (make-token (single-token-type c) (str c) line))
error?)
;; ... rest of the clauses follow the same idea ...
))))))
So, tokenization is fully self-contained and doesn’t require any state - it just takes a string. I later added column counting to this code, but that’s beside the point now.
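Being a pure function, it is also trivial to poke at from the REPL. A hypothetical session (I’m only looking at the :type of each token here):
cljlox.tokenizer> (let [[tokens error?] (tokenize "(+")]
                    [(mapv :type tokens) error?])
[[:left_paren :plus] false]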
Representing Code
This chapter is mostly about how to read and define a BNF-like grammar. However, what’s really interesting is the second part of this chapter - generating the code that defines the tree representation. This is often referred to as metaprogramming, i.e. programs writing programs.
At first, I was puzzled by this - do we really need to write Java that generates classes for us based on the grammar? Well, I mean, there are libraries that can do that at runtime without an issue, like instaparse, but in this case, we’re generating stuff even before compilation. In other words, the book suggests generating source code.
Well, that’s just macros, I thought to myself, and after studying the intent a bit, I made the following macro:
(defmacro define-ast [& specs]
(list* 'do
(for [spec specs]
(let [[name fields] (str/split spec #"\s+:\s+")
fields (->> (str/split fields #",?\s+")
(partition 2)
(map second)
(reduce (fn [fields name]
(conj fields (symbol name)))
[]))]
`(defrecord ~(symbol name) ~fields)))))
It expands the following grammar into a list of record definitions:
cljlox.ast> (macroexpand-1
'(define-ast
"Binary : Expr left, Token operator, Expr right"
"Grouping : Expr expression"
"Literal : Object value"
"Unary : Token operator, Expr right"))
(do
(clojure.core/defrecord Binary [left operator right])
(clojure.core/defrecord Grouping [expression])
(clojure.core/defrecord Literal [value])
(clojure.core/defrecord Unary [operator right]))
However, I realized that doing it this way was a bit of an overkill, so I ditched the macro completely. After all, writing these records requires much less work in Clojure than it does in Java, but we’ll get to that in a bit.
For comparison, here’s a definition from the book:
static class If extends Stmt {
If(Expr condition, Stmt thenBranch, Stmt elseBranch) {
this.condition = condition;
this.thenBranch = thenBranch;
this.elseBranch = elseBranch;
}
@Override
<R> R accept(Visitor<R> visitor) {
return visitor.visitIfStmt(this);
}
final Expr condition;
final Stmt thenBranch;
final Stmt elseBranch;
}
And here’s the Clojure version:
(defrecord If [condition then else])
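That one line buys us a positional constructor, a map constructor, and plain keyword access for free (a hypothetical REPL session with placeholder values):
cljlox.ast> (:then (->If :some-condition :then-branch :else-branch))
:then-branch
cljlox.ast> (map->If {:condition :c, :then :t, :else :e})
#cljlox.ast.If{:condition :c, :then :t, :else :e}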
So, why does the book’s code appear to be more involved?
The Visitor pattern
There’s a pretty specific problem that often appears in projects like this - the expression problem. In the Java world, one of the solutions is the Visitor pattern, which the book then explains thoroughly. I will not go into full detail here - you’re free to read the book if you’re interested.
The script that generates the statement classes also generates the methods required for the Visitor pattern to work. We can then override these methods for each resulting type, and that’s basically it. In the book, this is demonstrated by defining a pretty-printer for the AST, printing parsed data as S-expressions. Oh, sweet irony.
Here’s how it is done for the Binary expression:
package com.craftinginterpreters.lox;
class AstPrinter implements Expr.Visitor<String> {
String print(Expr expr) {
return expr.accept(this);
}
// ...
private String parenthesize(String name, Expr... exprs) {
StringBuilder builder = new StringBuilder();
builder.append("(").append(name);
for (Expr expr : exprs) {
builder.append(" ");
builder.append(expr.accept(this));
}
builder.append(")");
return builder.toString();
}
@Override
public String visitBinaryExpr(Expr.Binary expr) {
return parenthesize(expr.operator.lexeme,
expr.left, expr.right);
}
// ...
}
Here’s how it is done in Clojure:
;;; protocols.clj
(ns cljlox.protocols
(:require
[clojure.string :as str]))
(defprotocol IStringable
(tostring [self]))
;;; ast.clj
(ns cljlox.ast
(:require [cljlox.protocols :refer [IStringable tostring]])
(:import (cljlox.tokenizer Token)))
(defrecord Binary [left, ^Token operator, right]
IStringable
(tostring [{:keys [operator left right]}]
(format "(%s %s %s)" (tostring operator) (tostring left) (tostring right))))
And that’s it. But it doesn’t end here, because Clojure was designed in such a way that the expression problem is not a problem.
We can extend other types with our protocol, even after those types are defined.
For instance, here I extend Object to handle numbers and any other objects, as well as nil, which is a special case for protocols in Clojure:
(extend-protocol IStringable
Object
(tostring [self]
(cond (double? self) (-> self str (str/replace #"\.0$" ""))
(nil? self) "nil"
:else (str self)))
nil
(tostring [_] "nil"))
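A quick REPL sanity check of the branches above:
cljlox.ast> (tostring 45.0)
"45"
cljlox.ast> (tostring "hello")
"hello"
cljlox.ast> (tostring nil)
"nil"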
And there you have it - we don’t need the Visitor pattern in Clojure, and we’re free to extend existing methods to new classes at any point. The book demonstrates that the pattern works by defining overrides for all the other data types and running the following code:
public static void main(String[] args) {
Expr expression = new Expr.Binary(
new Expr.Unary(
new Token(TokenType.MINUS, "-", null, 1),
new Expr.Literal(123)),
new Token(TokenType.STAR, "*", null, 1),
new Expr.Grouping(
new Expr.Literal(45.67)));
System.out.println(new AstPrinter().print(expression));
}
Which prints (* (- 123) (group 45.67)).
We can do the same in the REPL:
cljlox.ast> (tostring
(ast/->Binary
(ast/->Unary
(Token. :minus "-" nil [0 0])
(Token. :literal "123" nil [0 0]))
(Token. :star "*" nil [0 0])
(ast/->Grouping
(Token. :literal "45.67" nil [0 0]))))
"(* (- 123) (do 45.67))"
It is a bit different, as I’m passing the line and column as a tuple [0 0], but other than that it is the same.
If you’re wondering, Token also implements the IStringable protocol:
(defrecord Token [type lexeme literal pos]
Object
(toString [_]
(str lexeme))
IStringable
(tostring [_]
(str lexeme)))
We’ll see more of the protocols stuff in a bit, so let’s move on.
Parsing Expressions
The parser described in the book is based on the recursive descent algorithm. I like it - it’s simple, effective, and reliable.
However, it is again done in the Java style, mutating private fields in the Parser class. I don’t like it: it’s error-prone, and it’s hard to track what’s happening.
Instead, I made it functional and pure:
(defn parse
"Parse a sequence of `Token`s into a sequence of expressions."
[tokens]
(try
(loop [exprs []
n 0]
(if (< n (dec (count tokens)))
(let [[expr n] (expression tokens n)]
(recur (conj exprs expr) n))
exprs))
(catch ExceptionInfo e
(let [{:keys [tokens n]} (ex-data e)
token (get tokens n)
[line col] (:pos token)]
(log/errorf "[%s:%s] %s at '%s'" line col (ex-message e) (str token))))))
Each rule then simply returns the expression object and the next n:
(defn- consume [tokens n type message]
(if (#{type} (:type (get tokens n)))
(inc n)
(throw (ex-info message {:tokens tokens
:n n}))))
(defn- current [tokens n]
(if-some [token (get tokens n)]
token
(throw (ex-info "Unfinished expression" {:tokens tokens
:n (dec n)}))))
;; ...
(defn- equality [tokens n]
(loop [[expr n] (comparison tokens n)]
(if (and (not= expr :eof) (#{:bang_equal :equal_equal} (:type (current tokens n))))
(let [n (inc n)
operator (previous tokens n)
[right n] (comparison tokens n)]
(recur [(->Binary expr operator right) n]))
[expr n])))
Yes, this involves passing n around, but if the function API is consistent, all is well. All functions in the parser accept a vector of tokens and the current position n in that vector - no exceptions. Why would you want anything else?
And it’s so much easier to test too:
cljlox.parser> (equality [{:type :number, :lexeme "1", :literal 1.0, :pos [1 1]}
{:type :plus, :lexeme "+", :literal nil, :pos [1 3]}
{:type :number, :lexeme "1", :literal 1.0, :pos [1 5]} ;; starting from here
{:type :equal_equal, :lexeme "==", :literal nil, :pos [1 7]}
{:type :number, :lexeme "2", :literal 2.0, :pos [1 10]}
{:type :eof, :lexeme "EOF", :literal nil, :pos [1 11]}]
2)
[{:left {:value 1.0},
:operator
{:type :equal_equal, :lexeme "==", :literal nil, :pos [1 7]},
:right {:value 2.0}}
5]
cljlox.parser> (equality [{:type :number, :lexeme "1", :literal 1.0, :pos [1 1]} ;; starting from here
{:type :plus, :lexeme "+", :literal nil, :pos [1 3]}
{:type :number, :lexeme "1", :literal 1.0, :pos [1 5]}
{:type :equal_equal, :lexeme "==", :literal nil, :pos [1 7]}
{:type :number, :lexeme "2", :literal 2.0, :pos [1 10]}
{:type :eof, :lexeme "EOF", :literal nil, :pos [1 11]}]
0)
[{:left
{:left {:value 1.0},
:operator {:type :plus, :lexeme "+", :literal nil, :pos [1 3]},
:right {:value 1.0}},
:operator
{:type :equal_equal, :lexeme "==", :literal nil, :pos [1 7]},
:right {:value 2.0}}
5]
Try doing that with a class that mutates current with every advance call.
Polymorphic dispatch
And I must note that although the resulting AST is printed as a sequence of plain maps, each element is actually an instance of a class:
cljlox.parser> (-> [{:type :number, :lexeme "1", :literal 1.0, :pos [1 1]}
{:type :plus, :lexeme "+", :literal nil, :pos [1 3]}
{:type :number, :lexeme "1", :literal 1.0, :pos [1 5]}
{:type :eof, :lexeme "EOF", :literal nil, :pos [1 6]}]
parse
first
class)
cljlox.ast.Binary
This is important because in Clojure we have two kinds of polymorphic dispatch. Earlier in this post I mentioned protocols, which dispatch on class. The other kind is multimethods, which dispatch on arbitrary runtime data.
Both allow for creating generic functions, just in slightly different flavors. For instance, here’s how multimethods are often described:
(defmulti interact (fn [a b] [(:type a) (:type b)]))
Here we have a generic function interact, which expects two arguments, each containing a :type key, like:
(def ship
{:type :ship,
:name "Buran"})
(def rock
{:type :asteroid,
:mass {:value 420000 :unit :kg}})
Now we can define methods for these two to interact:
(defmethod interact [:ship :ship] [this other]
(println (format "%s docks %s" (:name this) (:name other))))
(defmethod interact [:ship :asteroid] [this {{:keys [value unit]} :mass}]
(println
(format "%s destroys asteroid of mass %s %s"
(:name this) value (name unit))))
(defmethod interact [:asteroid :ship] [{{:keys [value unit]} :mass} other]
(println
(format "asteroid of mass %s %s pierces the %s"
value (name unit) (:name other))))
(defmethod interact [:asteroid :asteroid] [this other]
(println
(format "asteroid of mass %s %s collides with asteroid of mass %s %s"
(-> this :mass :value) (-> this :mass :unit name)
(-> other :mass :value) (-> other :mass :unit name))))
This means that the interact function receives two maps, takes the :type key from each, and forms a tuple [type-a type-b]. Then it looks up a definition for said tuple - in our case, we’ve defined methods for every permutation of :ship and :asteroid. So if we call interact with our objects, we see this:
user> (interact ship rock)
Buran destroys asteroid of mass 420000 kg
nil
user> (interact rock ship)
asteroid of mass 420000 kg pierces the Buran
nil
user> (interact rock rock)
asteroid of mass 420000 kg collides with asteroid of mass 420000 kg
nil
It makes sense to use multimethods when dealing with data, and while our AST is data, I chose protocols, because we don’t really need to dispatch over a combination of values. Protocols are also more efficient, as they rely on the JVM’s built-in mechanisms for polymorphic dispatch. An additional reason was that we don’t really use protocols at work, so I wanted to try them out in something involved.
A small downside is that it is a bit harder to feed plain data in as AST nodes, as I have to use conversion functions, like map->Binary, that take a map and convert it into a record.
But protocols are the better fit here, so I don’t mind some extra preparation steps.
Especially since it really only affects the tests, not the actual code.
Evaluating Expressions
And we’re back to Protocols vs Visitors.
The book, once again, uses the Visitor pattern to implement all the methods necessary to evaluate our syntax tree as a program. I won’t go and repeat the comparison; instead, here’s how I’ve done it in CljLox:
;; cljlox.evaluator
(defprotocol IInterpretable
(evaluate [self env locals]))
(extend-type Literal
IInterpretable
(evaluate [{value :value} _ _]
value))
(extend-type Grouping
IInterpretable
(evaluate [{expr :expression} env locals]
(evaluate expr env locals)))
;; ...
(extend-type Unary
IInterpretable
(evaluate [{:keys [operator right]} env locals]
(let [right (evaluate right env locals)]
(case (:type operator)
:minus (do (check-number-op! operator right)
(- right))
:bang (not (truth? right))
(runtime-error "Unsupported unary operator" {:token operator})))))
;; ...
The key point here is that protocols are open - it’s possible to extend them from other parts of the system. And it works in both directions: you can extend an existing protocol to a new class, or you can give an existing class a new protocol. In this project, I use both extend-protocol and extend-type to provide all the necessary method implementations.
Statements and State
Now we have to deal with the state.
(defonce *env (atom {}))
That’s it.
Control Flow
Implementing control flow is an interesting…
Just kidding, let’s talk about state.
Statements and State
So, state.
Yes, I’m using an atom here to hold the state of the interpreter. I could probably have used a volatile instead, but I chose an atom for some reason. There are no threads in CljLox, at least not yet, so a volatile would work just as well. Later I switched to volatile, though.
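For reference, the two are nearly interchangeable in single-threaded code; atoms give you an atomic compare-and-swap, while volatiles are just a faster mutable box:
(def a (atom {}))
(swap! a assoc :x 1)  ;; atomic update, safe under concurrency
@a                    ;; => {:x 1}

(def v (volatile! {}))
(vswap! v assoc :x 1) ;; plain write, no CAS loop, slightly faster
@v                    ;; => {:x 1}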
I’ve changed/created the evaluate method for the Var, Variable, and Assign nodes of the AST:
(extend-type Var
IInterpretable
(evaluate [{:keys [name initializer]}]
(swap! *env
assoc
(:lexeme name)
(when (some? initializer)
(evaluate initializer)))
nil))
(extend-type Variable
IInterpretable
(evaluate [{:keys [name]}]
(let [val (get @*env (:lexeme name) ::not-found)]
(if (= val ::not-found)
(runtime-error (format "Undefined variable '%s'." (:lexeme name)) {:token name})
val))))
(extend-type Assign
  IInterpretable
  (evaluate [{:keys [name value]}]
    (let [val (evaluate value)]
      (if (contains? @*env (:lexeme name))
        (swap! *env assoc (:lexeme name) val)
        (runtime-error (format "Undefined variable '%s'." (:lexeme name)) {:token name}))
      ;; assignment is an expression, so it yields the assigned value
      val)))
This, however, only handles the global state, so I later had to change it to this:
(defn make-env
([] (make-env nil))
([parent]
(atom {:enclosing parent
:values {}})))
(defonce *global-env (make-env))
;; ...
(defn set-variable [env var value]
(assoc-in env [:values var] value))
(extend-type Var
IInterpretable
(evaluate [{:keys [name initializer]} env]
(swap! env
set-variable
(:lexeme name)
(when (some? initializer)
(evaluate initializer env)))
nil))
(defn get-variable [env var]
(let [env @env
values (:values env)
name (:lexeme var)]
(if (contains? values name)
(get values name)
(if-some [enclosing (:enclosing env)]
(recur enclosing var)
(runtime-error (format "Undefined variable '%s'." name) {:token var})))))
(extend-type Variable
IInterpretable
(evaluate [{:keys [name]} env]
(get-variable env name)))
(extend-type Assign
  IInterpretable
  (evaluate [{:keys [name value]} env]
    (let [val (evaluate value env)]
      ;; look in the :values of the current scope
      (if (contains? (:values @env) (:lexeme name))
        (swap! env set-variable (:lexeme name) val)
        (runtime-error (format "Undefined variable '%s'." (:lexeme name)) {:token name}))
      val)))
(extend-type Block
IInterpretable
(evaluate [{:keys [statements]} env]
(let [env' (make-env env)]
(doseq [statement statements]
(evaluate statement env')))))
That’s a lot of code to read, so I’ll outline the changes.
First, the evaluate method now accepts two arguments instead of just one.
Yes, we’re now passing the environment around.
In a non-functional interpreter, we could just set the current environment of the Evaluator class whenever we enter or leave a scope, but here we can’t do that.
Err, we can, I just chose not to, because, again, it is much easier to reason about the code this way.
The environment is just a map wrapped in an atom, after all.
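To make the chain concrete, here’s a hypothetical REPL walk through a lookup that climbs the :enclosing links (using a bare map in place of a real Token):
(def global (make-env))       ;; an atom holding {:enclosing nil, :values {}}
(def inner (make-env global)) ;; child scope pointing at global

(swap! global set-variable "a" 1.0)

;; "a" is not in inner's :values, so get-variable recurs into :enclosing
(get-variable inner {:lexeme "a"}) ;; => 1.0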
Control Flow
Aside from adding a few new extensions to the protocol:
(extend-type If
IInterpretable
(evaluate [{:keys [condition then else]} env]
(let [test (evaluate condition env)]
(if (truth? test)
(evaluate then env)
(when else
(evaluate else env))))))
(extend-type Logical
IInterpretable
(evaluate [{:keys [left operator right]} env]
(let [left (evaluate left env)]
(case (:type operator)
:or (if (truth? left)
left
(evaluate right env))
:and (if (not (truth? left))
left
(evaluate right env))))))
we also implement loops in this section.
While I like the approach of implementing while in the host language, and for in terms of our definition of while, honestly, I would have preferred a tour of tail call elimination instead.
It would probably make things more complicated, but I feel that it would bring more value to the table.
Since this book teaches how to create a language, I think tail calls are worth mentioning at the very least.
When I was programming in C and had no idea infinite recursion was even possible, learning that there are languages that can do that was a huge revelation. How many algorithms could I have written in far less complicated ways using continuation-passing style? Yeah, a weird statement to make, but CPS can be easier to understand at times.
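For what it’s worth, the host-language while is nearly a one-liner in this setup. A sketch, assuming a While AST node with condition and body fields:
(extend-type While
  IInterpretable
  (evaluate [{:keys [condition body]} env]
    (loop []
      ;; re-evaluate the condition before each iteration
      (when (truth? (evaluate condition env))
        (evaluate body env)
        (recur)))))
And loop/recur is, fittingly, Clojure’s own workaround for the JVM’s lack of tail call elimination.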
Functions
Now we’re getting into the meat of the interpreter. We can put together all our variables, scopes, and loops, and wrap them into something we can define and call later. Everything starts with a different protocol:
(defprotocol ICallable
(call [self arguments token env]))
And then we can extend our types with it:
(extend-type LoxCallable
ICallable
(call [{:keys [arity function]} arguments token env]
(if (= arity (count arguments))
(apply function arguments)
(runtime-error
(format "Expected %s arguments but got %s." arity (count arguments))
{:token token}))))
(extend-protocol ICallable
Object
(call [self & rest]
(runtime-error "Can only call functions and classes."))
nil
(call [self & rest]
(runtime-error "Can only call functions and classes.")))
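So any non-callable value, including nil, yields a friendly runtime error. And a native function can be just a record instance - a sketch in the spirit of the book’s clock(), assuming LoxCallable is a record with the arity and function fields used above (so map->LoxCallable exists):
(def clock
  (map->LoxCallable
   {:arity 0
    :function (fn [] (/ (System/currentTimeMillis) 1000.0))}))

(call clock [] nil nil) ;; => seconds since the epoch, as a double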
Again, I really like this feature of Clojure - defining a new protocol and extending it over already existing types like Object or nil is great. Same thing with our own existing types, such as LoxCallable (used for the interpreter’s native functions) - we can just give it a new protocol and implement it. No need for visitors, you just do:
(extend-type Function
IInterpretable
(evaluate [{:keys [name] :as self} env]
(let [f (LoxFunction. self env)]
(when name
(swap! env assoc-in [:values (:lexeme name)] f))
f)))
And then:
(defrecord LoxFunction [^Function declaration, closure]
ICallable
(call [{{:keys [params body]} :declaration} args _ _]
(try
(let [env (make-env closure)]
(doseq [[arg val] (map vector params args)]
(swap! env assoc-in [:values (:lexeme (:name arg))] val))
(evaluate body env))
(catch ExceptionInfo e
(let [data (ex-data e)]
(if (= :return (:type data))
(:value data)
(throw e))))))
IStringable
(tostring [self]
(format "#<function: %s>" (->> self :declaration :name :lexeme))))
Every time I read things like visitFunctionStmt, I ask myself - what exactly does visiting do, why does it create an object, and so on? Well, I’m not a Java programmer for a reason, I guess; visitors, abstract factories, and other such things don’t tickle my fancy.
Resolving and Binding
I know, I’m being a bit shallow on details, but honestly, I just don’t remember well what was going on in my mind back then. The resolver, however, is another thing I took the liberty of making immutable, so it’s quite different from the book. I have a giant commit where I basically change this in every protocol method:
(defprotocol ICallable
- (call [self arguments token env]))
+ (call [self arguments token env locals]))
Well, why would I do that? No reason, really. I just noticed that the only places the locals change are the evaluation entry point and each new scope. None of these have to use mutable state - it was purely for convenience (and the style the book is written in) - so I changed it.
(defn run
([source]
(run source nil))
([source file]
(let [fmt (if file (str file " %s") "%s")
{:keys [errors tokens]} (tokenize source)]
(if (seq errors)
(with-out-err
(doseq [error errors]
(println (format fmt (str error)))))
- (let [expressions (parse tokens)]
- (reset! *locals {})
- (doseq [expr expressions]
- (lox-resolve expr []))
+ (let [expressions (parse tokens)
+ locals (reduce (fn [locals expr]
+ (merge locals (second (lox-resolve expr [[] {}]))))
+ {} expressions)]
(reduce (fn [_ expr]
(when (seq expr)
- (interpret expr)))
+ (interpret expr locals)))
nil expressions))))))
A similar change was needed in the resolve-local function, as it now uses additional storage passed in as an argument:
-(defn- resolve-local [expr name scope-stack]
+(defn- resolve-local [expr name [scope-stack locals]]
(when (seq scope-stack)
(loop [i (dec (count scope-stack))]
(if (contains? (get scope-stack i) (:lexeme name))
- (register-expr-scope expr (- (count scope-stack) 1 i))
- (when (> i 0)
- (recur (dec i)))))))
+ [scope-stack (assoc locals expr (- (count scope-stack) 1 i))]
+ (if (> i 0)
+ (recur (dec i))
+ [scope-stack locals])))))
With the rest of the changes, the resolver is fully immutable. But is it worth the effort? Well, kinda - it is much easier to debug, and I could watch when the locals appeared in the storage while stepping through the code. I could, theoretically, make the same change for the global scope storage, but that would be way too much work, and I was already tired of this change. It would help with the annoyance that the scope storage refers to itself at some point, making the debugger unhappy and cycling infinitely during result inspection.
And yes, later I had to store the locals in a volatile! to keep them around in the REPL, but not because it is mandatory - I was just too lazy to thread them through the loop.
Classes
We’re entering the part of the book I don’t like, yay! As you know, I like Clojure, but not really because it is a Lisp. I like Clojure because it is a functional language without the burden of a type system.
If I were into functional programming and type systems, I would probably be writing about Haskell or Idris, but I’m writing about Clojure, and more often Lua/Fennel. If I were into Lisp and LISP, I would probably be writing about Common Lisp or Scheme. But I’m into the type-less, functional approach, so Clojure is a perfect fit for me. Well, maybe Elixir as well, but they’re bringing a type system in, so I guess they’re going off track on what makes the language fun. But I digress.
Out of all the features, the ones I dislike the most are classes and inheritance, and these are the final chapters of the first half of the book. I dislike classes and inheritance mainly because, in my past experience, they have only complicated things. I worked on a somewhat large code base in C++ before, and inheritance was a huge pain there.
Other languages that feature classes often over-complicate things as well - look at Java with its visitors, factories, and such. So I tend to dislike classes. To be fair, there are use cases for this paradigm, and there are fine implementations like CLOS or Smalltalk, but I would not try to base everything on classes.
Speaking of this book - why are we already at classes when we haven’t covered any basic data structures? There are strings, yes, but where are arrays and records/hash-maps? What about tuples? Some could argue that strings can be used as arrays, or that their code could be repurposed, but not in our case, as we’re using the host platform’s strings, which are immutable. Implementing strings as byte arrays is a thing, but it’s not what the book did, so there are simply no arrays in Lox.
Well, we can implement a linked list with classes, but there’s literally no way to implement an array without explicit access to the memory. I don’t really want to cover the changes in the code base, as there’s nothing you haven’t seen before in this post.
One thing I do want to cover though is the use of classes to implement namespaces. This is covered in the “Challenges” section after the actual chapter:
We have methods on instances, but there is no way to define “static” methods that can be called directly on the class object itself. Add support for them. Use a class keyword preceding the method to indicate a static method that hangs off the class object.
class Math {
  class square(n) {
    return n * n;
  }
}

print Math.square(3); // Prints "9".
You can solve this however you like, but the “metaclasses” used by Smalltalk and Ruby are a particularly elegant approach. Hint: Make LoxClass extend LoxInstance and go from there.
Saying I dislike this approach would be an understatement. Why would you do that? It’s like stating: “Our language can’t do namespaces, but do not worry, we have static methods that provide exactly the same thing, you just do it the ugly way”. Well, it probably makes sense in some languages like Smalltalk, but not everything is Smalltalk.
Take a look at Lua for a second. It doesn’t have namespaces. It doesn’t have classes either. Yet it does have a way of defining modules - it just uses hash tables for that. And it does the same thing with classes - you use a hash table for those too. You don’t have to be Clojure to just use maps!
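To spell the Lua trick out in Clojure terms, a “namespace” can be nothing more than a map of functions (a toy sketch):
(def math-module
  {:square (fn [n] (* n n))})

((:square math-module) 3) ;; => 9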
Then the book proceeds:
Most modern languages support “getters” and “setters”—members on a class that look like field reads and writes but that actually execute user-defined code. Extend Lox to support getter methods. These are declared without a parameter list. The body of the getter is executed when a property with that name is accessed.
Every time I see this, I ask myself “Why?!” - a pointless question.
Look at this code:
class Circle {
init(radius) {
this.radius = radius;
}
area {
return 3.141592653 * this.radius * this.radius;
}
}
var circle = Circle(4);
print circle.area; // Prints roughly "50.2655".
Accessing the area field, we get the area of a circle with a radius of 4 units. Cool, right?
Now look at this:
class Circle {
init(radius) {
this.radius = radius;
}
area {
while(true) {}
}
}
var circle = Circle(4);
print circle.area; // good luck debugging this
Surely, looking at the circle.area part of the code, you can’t see that there’s a call there. That’s why we have syntax, mind you! This “feature” looks to me like a deliberate shot in the foot - the expressive power it gives you amounts to not hitting two keys on the keyboard, yet it introduces ambiguity into your language.
It was at this point that I stopped even trying to do any of the so-called “Challenges”, but I will have to talk about them later, as I do have more things to say.
Inheritance
Most of the changes in this section are solely related to how we access methods, fields, and so on, so again, I won’t go into the actual code differences. Call me lazy.
I’m glad that the book went for a single-inheritance approach though, as nothing gives me a stronger desire to punch someone than seeing a class that inherits from 16 others, each of which also inherits from more than three. I have seen this in the C++ codebase I mentioned before. Looking up where a method is defined is hard enough, and this C++ was written before things like language servers existed, so tags were the only way to navigate it in a somewhat intelligent way. Unfortunately, generating tags was slow, and I had to update them frequently as I made changes to the code. Often, a jump wasn’t possible because the tags were outdated.
So, having single inheritance is, perhaps, limiting, as you can’t go nuts and say that Cat inherits from both Animal and Carnivore. That may be a bummer, I guess, but it’s a textbook example - I have yet to see real OOP code like that.
Personally, I’ll take single inheritance over multiple inheritance any time.
But I’d rather do no inheritance at all, to be honest; protocols are much more fun to use.
Final thoughts
This post is a total trainwreck. I should have published it in the middle of April 2022, but for some reason, I didn’t. To be fair, I still have a draft of the first version of this post, and it is horrible, so I guess it’s good that I didn’t publish it back then. Such posts should be written while you’re spending time with the book, not afterward, and especially not almost two years later.
So that’s why I’ll be redoing the second half of the book in Zig and capturing my thoughts as I go through it! For the second time.
Yes, I already tried this approach, but given that I didn’t know Zig at all back then, the resulting code is terrible. Can’t say I’m proud of the Clojure version either, but the Zig version is leaps and bounds worse in all areas. Hopefully, this time around I will be able to do a better job.
I’ve mentioned that I have more things to say about the “Challenges” waiting for the reader at each chapter’s end. I guess this will have to wait for the second post, which I’m sure won’t take years to appear. Ahem.
You see, most of the challenges, or should I say exercises, were not really out of the way in the first half of the book. I can’t say the same for the second half - I had major issues going through it precisely because I completed the challenges. This wasn’t a problem in the first half, where most challenges are easy, but it can’t be said for the second half, as we’re no longer in managed territory. Implementing your own bytecode interpreter is much harder than using your host language for the heavy lifting.
Anyway, I hope you found this post interesting enough to read to this point. If not, well, that’s fair - I didn’t find it interesting to read either, honestly. But I had to get it out of my system, and hopefully, the second part will be more refined.