Blazing fast servers with PureScript and ES4X!

I was investigating running PureScript servers on ES4X, so I wrote simple hello-world apps in PureScript + Node, PureScript + ES4X, and Haskell + Wai.
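For reference, the Haskell side is essentially a minimal Wai/Warp app along these lines (a sketch, not necessarily byte-for-byte what I benchmarked):

```haskell
{-# LANGUAGE OverloadedStrings #-}
module Main where

import Network.HTTP.Types (status200)
import Network.Wai (responseLBS)
import Network.Wai.Handler.Warp (run)

-- Minimal Wai/Warp hello world: answer every request with plain text
main :: IO ()
main = run 8080 $ \_req respond ->
  respond $ responseLBS
    status200
    [("Content-Type", "text/plain; charset=utf-8")]
    "Hello, World!"
```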

I’ll let the numbers speak for themselves -

PURESCRIPT NODE - ~11K req/sec

➜  ~ wrk -t2 -c100 -d1m -R140000 http://127.0.0.1:8080
Running 1m test @ http://127.0.0.1:8080
  2 threads and 100 connections
  Thread calibration: mean lat.: 4963.568ms, rate sampling interval: 16941ms
  Thread calibration: mean lat.: 4963.356ms, rate sampling interval: 16941ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    32.18s    13.56s    0.92m    56.37%
    Req/Sec     5.36k   252.00     5.61k    50.00%
  647384 requests in 1.00m, 148.17MB read
Requests/sec:  10789.76
Transfer/sec:      2.47MB

HASKELL WAI - ~120K req/sec

➜  ~ wrk -t2 -c100 -d1m -R140000 http://127.0.0.1:8080
Running 1m test @ http://127.0.0.1:8080
  2 threads and 100 connections
  Thread calibration: mean lat.: 878.842ms, rate sampling interval: 2885ms
  Thread calibration: mean lat.: 878.826ms, rate sampling interval: 2885ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     5.02s     1.86s    8.56s    59.83%
    Req/Sec    60.48k     1.47k   61.61k    82.35%
  7187914 requests in 1.00m, 1.15GB read
Requests/sec: 119799.19
Transfer/sec:     19.65MB

PURESCRIPT ES4X - ~140K req/sec

➜  ~ wrk -t2 -c100 -d1m -R140000 http://127.0.0.1:3000
Running 1m test @ http://127.0.0.1:3000
  2 threads and 100 connections
  Thread calibration: mean lat.: 1109.463ms, rate sampling interval: 2557ms
  Thread calibration: mean lat.: 1108.048ms, rate sampling interval: 2555ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   436.74ms  457.79ms   1.46s    77.52%
    Req/Sec    71.49k     2.84k   75.28k    73.68%
  8382538 requests in 1.00m, 415.70MB read
Requests/sec: 139709.57
Transfer/sec:      6.93MB

This seems amazing! Has anyone else tried this before?

11 Likes

I’d never heard of this runtime before, but this is totally insane! I never thought a JavaScript runtime could get this fast, AND it can interoperate with any language that GraalVM supports! I’ll do some (probably unscientific) benchmarks of my Project Euler code (I’ve been stumped on speeding up a few of the algorithms) and report back with the results soon.

2 Likes

For reference, my code for running a PureScript web server on top of ES4X is here - https://github.com/ajnsit/purescript-es4x

3 Likes

This project to run PureScript natively on Graal seems very interesting, but it’s been long abandoned. Has anyone tried to get it working again? I wonder what the performance would be like.

1 Like

Here I was expecting a discussion about https://en.wikipedia.org/wiki/ECMAScript#4th_Edition_(abandoned) :laughing:

I’ve long wanted to get a proper GraalVM interpreter working, going so far as to write codecs and folds for CoreFn data types in Java, but never found the time or energy to see it through. I think it would have a lot of potential.

3 Likes

Here are some CoreFn codecs/folds for anyone interested: https://gist.github.com/natefaubion/398fcd7fa4c9415e8950235b7d15f113

3 Likes

Interesting! Curious though: how come the transfer/sec is lower than Haskell’s? Not erroring out, hopefully?

1 Like

The Haskell server is sending more headers. It shouldn’t make a lot of difference to the numbers though.

Haskell

✗ curl -i http://localhost:8080
HTTP/1.1 200 OK
Transfer-Encoding: chunked
Date: Fri, 25 Dec 2020 18:34:59 GMT
Server: Warp/3.3.13
Content-Type: text/plain; charset=utf-8

Hello, World!%

ES4X

✗ curl -i http://localhost:3000
HTTP/1.1 200 OK
content-length: 13

Hello, World!%

2 Likes

The Haskell server is sending more headers. It shouldn’t make a lot of difference to the numbers though.

I appreciate the intent here, but I don’t know that that’s true. For a “hello world” server the processing is effectively a no-op, so the performance is most likely IO-bound, which would imply that tripling the number of bytes (154 bytes for WAI vs. 50 for ES4X) is definitely relevant.
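The wrk totals above bear this out. Back-of-the-envelope (assuming wrk’s “MB”/“GB” are binary units):

```haskell
-- Rough bytes-per-response from the wrk output earlier in the thread
-- (assumes wrk's "1.15GB" / "415.70MB" totals use binary units)
main :: IO ()
main = do
  let waiBytes    = 1.15 * 1024 ^ 3 :: Double    -- WAI:  1.15GB read
      es4xBytes   = 415.70 * 1024 ^ 2 :: Double  -- ES4X: 415.70MB read
      perRespWai  = waiBytes / 7187914           -- ~172 bytes per response
      perRespEs4x = es4xBytes / 8382538          -- ~52 bytes per response
  print (perRespWai, perRespEs4x, perRespWai / perRespEs4x)  -- ratio ~3.3
```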

The 120,000 req/sec is impressive on its own, so be proud of that; but I do take issue with the implication that it’s “faster” than WAI when ES4X is doing roughly a third of the work that WAI is :slightly_smiling_face:

In any real workload these extra headers most likely amortize away into nothing, but in a micro-benchmark they’re important. Just another argument against micro-benchmarks, I suppose :stuck_out_tongue:

4 Likes

Well, I’d have more reason to be proud of the Haskell perf, since I wrote the Haskell web framework used in the benchmark, whereas purescript-es4x is only a small shim over ES4X itself!

This is more about a JS runtime being fast at all, with performance at least comparable to compiled Haskell.

To make the comparison a bit fairer, I padded the ES4X response body with the equivalent extra text -

➜  ~ curl -i http://localhost:8080
HTTP/1.1 200 OK
content-length: 117

Transfer-Encoding: chunked
Date: Tue, 29 Dec 2020 13:01:24 GMT
Content-Type: text/plain; charset=utf-8
Hello, World!%%

And here are the results: still a little faster than Haskell, at ~125K req/sec -

➜  ~ wrk -t2 -c100 -d1m -R140000 http://127.0.0.1:8080
Running 1m test @ http://127.0.0.1:8080
  2 threads and 100 connections
  Thread calibration: mean lat.: 447.469ms, rate sampling interval: 1689ms
  Thread calibration: mean lat.: 448.286ms, rate sampling interval: 1692ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     3.64s     1.37s    6.17s    58.93%
    Req/Sec    62.86k     1.68k   63.88k    91.38%
  7534584 requests in 1.00m, 1.10GB read
Requests/sec: 125576.96
Transfer/sec:     18.80MB

1 Like