Streaming and receiving chunked multipart data with fnl-http
I’ve been absent for a while - you may have noticed that, compared to the previous year, I’ve posted a lot less this time. There are two closely related reasons for that. First, I felt burned out from programming. Second, I finally picked up a guitar after five or more years and started recording again. So now I only do hobby programming when I feel passionate about something or challenged in an entertaining way. This works for me, but it leaves me with less material to write about.
One such passion project for me this year is fnl-http. It’s an HTTP/1.1 client and server implementation for Fennel, built on top of luasocket and async.fnl - another library of mine.
Last time I posted, I was mostly talking about testing. And I’m glad that I wrote all those tests, because the change I’m going to talk about this time broke a lot of them, and I was able to diagnose problems much more quickly. But the main changes are related to the post I made before that: fnl-http Improvements. In that post, the most exciting things for me were chunked transfer encoding and support for multipart requests. I’ve implemented support for reading chunked responses (the client could already send chunked requests) and moved on to implementing multipart requests.
HTTP Server
For some time, the implementation felt sufficient, so I started working on a proof-of-concept asynchronous HTTP server. The server is pretty much bare-bones right now, though it features everything you need to implement a proper one. Here’s how it can be used:
(local server (require :io.gitlab.andreyorst.fnl-http.server))
(local json (require :io.gitlab.andreyorst.fnl-http.json))

(fn handler [{: headers &as request}]
  {:status 200
   :headers {:connection (or headers.Connection :keep-alive)
             :content-type :application/json}
   :body (json request)})

(with-open [s (server.start handler {:port 3000})]
  (s:wait))
It’s a rather simple echo server that sends back the parsed request as JSON. Let’s try it:
>> (local http (require :io.gitlab.andreyorst.fnl-http.client))
nil
>> (http.get "localhost:3000" {:as :json})
{:body {:headers {:Host "localhost:3000"}
        :method "GET"
        :path "/"
        :protocol-version {:major 1 :minor 1 :name "HTTP"}}
 :headers {:Connection "keep-alive"
           :Content-Length "131"
           :Content-Type "application/json"}
 :http-client #<tcp-client: 0x561839afe390>
 :length 131
 :protocol-version {:major 1 :minor 1 :name "HTTP"}
 :reason-phrase "OK"
 :request-time 13
 :status 200
 :trace-redirects {}}
Now, of course, this won’t work as is, because not all Lua values can be encoded as JSON - functions and custom objects like Readers, for example. We can cheat, though, and serialize the request as a Fennel table representation, sending it back as a string:
(local {: view} (require :fennel))

(fn handler [{: headers &as request}]
  {:status 200
   :headers {:connection (or headers.Connection :keep-alive)
             :content-type :application/json}
   :body (view request)})
Now we can try a POST request:
>> (http.post "localhost:3000" {:body "Hello HTTP"})
{:body "{:content #<Reader: 0x55eeb3016fc0>
 :headers {:Content-Length \"10\" :Host \"localhost:3000\"}
 :length 10
 :method \"POST\"
 :path \"/\"
 :protocol-version {:major 1 :minor 1 :name \"HTTP\"}}"
 :headers {:Connection "keep-alive"
           :Content-Length "183"
           :Content-Type "application/json"}
 :http-client #<tcp-client: 0x55eeb3059470>
 :length 183
 :protocol-version {:major 1 :minor 1 :name "HTTP"}
 :reason-phrase "OK"
 :request-time 20
 :status 200
 :trace-redirects {}}
This should give you an idea of how to implement a proper handler. The reason the content is a Reader is simple: we don’t want to process any more data than necessary when parsing the request. Once we obtain the HTTP-related data, such as the headers, method, path, and protocol version, we wrap the rest of the data in a Reader object, so the handler function can work with it later or even discard it. In this particular case, it’s as simple as calling (request.content:read (tonumber request.headers.Content-Length)), but it may vary depending on what was sent and what path the client used, so I’m leaving this open. Using a Reader enables asynchronous and lazy processing of the request’s body.
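For illustration, here’s a minimal sketch of a handler that reads such a body and echoes it back as plain text. It assumes the client actually sent a Content-Length header (no chunked or multipart handling), so treat it as a starting point rather than a complete handler:

(fn echo-handler [{: headers : content}]
  ;; minimal sketch: assumes a Content-Length header is present;
  ;; chunked and multipart bodies are not handled here
  (let [body (content:read (tonumber headers.Content-Length))]
    {:status 200
     :headers {:connection (or headers.Connection :keep-alive)
               :content-type :text/plain}
     :body body}))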
Multipart requests
What’s interesting, however, is that there may be multiple bodies if the request is a multipart one. We can send a multipart request like this:
(http.post "localhost:3000"
           {:multipart [{:name "foo" :content "some string"}
                        {:name "file" :filename "some-data.json"
                         :content (io.open "some-data.json")}]})
Ideally, the server should be able to process the foo part first, and then the file part. Additionally, a file can be really big, so we might want to avoid reading it fully into memory. For example, the JSON module of fnl-http can parse files without fully reading them into memory, as it is based around Readers.
The question, however, is how do we hand this to a handler without reading the request in full first? We don’t know how many parts there will be, and we might not know the size of each part. Let’s look at how the handler encodes this request:
>> (http.post "localhost:3000"
              {:multipart [{:name "foo" :content "some string"}
                           {:name "file" :filename "some-data.json"
                            :content (io.open "some-data.json")}]})
{:body "{:headers {:Content-Type \"multipart/form-data; boundary=------------aaec41d0-1c75-4037-a039-91cc2b31584c\"
            :Host \"localhost:3000\"}
 :method \"POST\"
 :parts #<function: 0x55eeb28e79c0>
 :path \"/\"
 :protocol-version {:major 1 :minor 1 :name \"HTTP\"}}"
 :headers {:Connection "keep-alive"
           :Content-Length "256"
           :Content-Type "application/json"}
 :http-client #<tcp-client: 0x55eeb31df9e0>
 :length 256
 :protocol-version {:major 1 :minor 1 :name "HTTP"}
 :reason-phrase "OK"
 :request-time 80
 :status 200
 :trace-redirects {}}
You can see that there’s no content field anymore. Instead, there’s a parts field, and it is a function. This function is a standard Lua iterator that you can call until it returns nil, or plug into any of the iteration forms. Let’s change our handler to support both content and parts:
(fn handler [request]
  (case request
    {: content}
    (set request.content
         (if request.headers.Content-Length
             (content:read (tonumber request.headers.Content-Length))
             (= request.headers.Transfer-Encoding "chunked")
             (content:read :*a)))
    {: parts}
    (set request.parts
         (icollect [part parts]
           (doto part
             (tset :content (part.content:read :*a))))))
  {:status 200
   :body (json request)})
Let’s look at this a bit closer. The case has two branches - one for content and one for parts. The content branch is pretty straightforward: we replace the request’s content field with the data obtained from the Reader. If there was a Content-Length header, we use it to determine how much to read. If not, we (rather crudely) check whether the chunked Transfer-Encoding was used and read all of the data from the Reader. We can do so because chunked readers know when to stop, based on the final chunk header. The parts branch is similar, except we have to go one level deeper; on the other hand, it’s a bit simpler to work with each part’s contents.
First, the main purpose of icollect here is to produce a sequential table from the iterator’s values; while iterating, we replace the content field of each part. The content field is again a Reader, but it’s a sized reader, so we can almost always read it in full. Even if the content length was not specified, parts are separated by boundaries, so we always know when to stop. Each part is a table of the following structure:
{:content #<Reader>
 :headers {...}
 :length N ;; optional field, present if the part's headers had a Content-Length
 :name "part name"
 :filename "name of the uploaded file"
 ;; alternatively, :filename* can be present instead
 :type "part kind"}
As much information as possible is provided to make working with parts easier. And because parts are produced by an iterator, we don’t have to know in advance how many parts are coming.
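For example, a handler that wants to avoid keeping large uploads in memory could stream each file part to disk as the iterator produces it. This is only a sketch under a couple of assumptions: that a part’s Reader, like a Lua file handle, accepts a byte count and returns nil once exhausted, and that the /tmp destination and the 4096-byte chunk size are arbitrary choices of mine:

(fn save-file-parts [parts]
  ;; process parts as they arrive; parts with a filename get written to
  ;; disk chunk by chunk instead of being read fully into memory
  (each [part parts]
    (when part.filename
      (with-open [out (io.open (.. "/tmp/" part.filename) "w")]
        ;; assumption: reading a byte count returns nil when the part ends
        (var chunk (part.content:read 4096))
        (while chunk
          (out:write chunk)
          (set chunk (part.content:read 4096)))))))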
However, there’s a caveat - we can’t gather information about each part first and process the contents later. Moving to the next part consumes the request body up to the next boundary, so we can’t rewind the Reader back to a part’s content. Thus, doing this won’t work:
(let [all-parts (icollect [part parts] part)]
  (each [_ part (ipairs all-parts)]
    (part.content:read :*a)))
You’ll have to either copy each reader’s contents into another, in-memory Reader, or process parts as they appear.
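A sketch of the first approach could look like this; instead of building a new in-memory Reader, I simply read each part’s content into a string while the iterator is still positioned on it, so the data stays accessible later:

(let [all-parts (icollect [part parts]
                  ;; read the content while we're still on this part,
                  ;; before the iterator moves past the boundary
                  (doto part
                    (tset :content (part.content:read :*a))))]
  (each [_ part (ipairs all-parts)]
    ;; part.content is now a plain string, usable at any point
    (print part.name (length part.content))))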
Now, we can try our request again:
>> (http.post "localhost:3000"
              {:multipart [{:name "foo" :content "some string"}
                           {:name "file" :filename "some-data.json"
                            :content (io.open "some-data.json")}]
               :as :json})
{:body {:headers {:Content-Type "multipart/form-data; boundary=------------a03996f2-0d1c-4aa4-97f5-ac90b06cb6ea"
                  :Host "localhost:3000"}
        :method "POST"
        :parts [{:content "some string"
                 :headers {:Content-Disposition "form-data; name=\"foo\""
                           :Content-Length "11"
                           :Content-Transfer-Encoding "8bit"
                           :Content-Type "text/plain; charset=UTF-8"}
                 :length 11
                 :name "foo"
                 :type "form-data"}
                {:content "{\"value\": \"JSON data\"}\n"
                 :filename "some-data.json"
                 :headers {:Content-Disposition "form-data; name=\"file\"; filename=\"some-data.json\""
                           :Content-Transfer-Encoding "binary"
                           :Content-Type "application/octet-stream"
                           :Transfer-Encoding "chunked"}
                 :name "file"
                 :type "form-data"}]
        :path "/"
        :protocol-version {:major 1 :minor 1 :name "HTTP"}}
 :headers {:Connection "keep-alive" :Content-Length "808"}
 :http-client #<tcp-client: 0x55cf2d942450>
 :length 808
 :protocol-version {:major 1 :minor 1 :name "HTTP"}
 :reason-phrase "OK"
 :request-time 13
 :status 200
 :trace-redirects {}}
As you can see, the foo part had a length of 11, and the file part was sent with chunked transfer encoding. Before these changes, there was no way of sending multipart data with chunked encoding, so this is a pretty good change in my opinion; it definitely expands the possibilities. I still need to test this against some real HTTP servers, though, as I don’t see this feature used too often.
Performance
Now, let’s talk performance. I used the wrk tool to measure the server’s RPS with a simple handler that adds a 50 millisecond delay to each request:
(local async (require :io.gitlab.andreyorst.async))

(fn handler [{: headers &as request}]
  (async.<! (async.timeout 50))
  {:status 200
   :headers {:connection (or headers.Connection "keep-alive")
             :content-length 11
             :content-type "text/plain"}
   :body "hello world"})
The handler function always runs in an asynchronous context, so we can use parking operations such as <! inside of it. Running it with 12 threads and 400 connections for 30 seconds yields about 2k requests per second:
Running 30s test @ http://localhost:3000
  12 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   146.42ms   94.13ms   2.00s    98.54%
    Req/Sec   205.07     90.83    343.00     62.36%
  69835 requests in 30.09s, 8.00MB read
  Socket errors: connect 0, read 0, write 0, timeout 151
Requests/sec:   2321.21
Transfer/sec:    272.37KB
Not bad for a single-threaded Lua runtime, I would say!
Now, of course, this kind of benchmark is more synthetic than realistic, so don’t expect it to perform this well in all cases. With a more complex handler that deals with routing and data processing, this number will go down. Still, I’m pretty satisfied with these results.
And CPU usage is quite low too. When idling, a single CPU core sits at about 5%, and during the benchmark it rises to about 70%. Removing the asynchronous sleep raises CPU usage to 100% (single core), peaking at 67MB of RAM and serving about 4k requests per second. So with an asynchronous sleep, our server process is doing some actual sleeping, leaving some resources to the OS!
And given how async.fnl is implemented, we can actually replace (async.<! (async.timeout 50)) with a blocking loop like (for [i 1 1_000_000] nil) and still get around 400 RPS. That’s a busy loop with one million iterations per handler invocation on each request, so you shouldn’t worry too much about blocking the handler.
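For reference, that blocking variant is just the benchmark handler with the asynchronous sleep swapped for the busy loop:

(fn handler [{: headers &as request}]
  ;; same handler as in the benchmark, but with the asynchronous sleep
  ;; replaced by a busy loop of one million iterations per request
  (for [i 1 1_000_000] nil)
  {:status 200
   :headers {:connection (or headers.Connection "keep-alive")
             :content-length 11
             :content-type "text/plain"}
   :body "hello world"})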
Is it a good result? You be the judge.