I will (suddenly) be at fosdem if you want to meet up!
I open settings in an app by a company valued at >$15B. the ui is different. I get a popup: "settings is under construction". I audibly sighed.
very fair, genuinely thanks for sharing
what is your honest opinion on bluesky as a twitter ~alt, for less work/tech and more typical social media?
owned this domain before you and didn't renew, glad it went to good use o7
dev here! not yet but definitely interested in doing it (just annoying to set up)
I used LLRT's Lambda release without the SDK. I tried measuring round-trip latency too but couldn't get good numbers (I'm not an AWS person). I think duration is what matters most here since this is specifically about cold starts, but happy to follow up.
Thanks! Yes, there are a surprising number of use cases, and embedded is a big one: any environment where you can't JIT but want better performance than interpreting.
For embedded you can either use Wasm or Porffor's compile-to-C feature and feed that to some proprietary compiler, etc
Yeah afaik that is just AWS overhead
A graph of Porffor cold starts. P50: 16.3ms, P90: 27.5ms, P99: 41.1ms.
My ahead-of-time JS engine Porffor eliminates JS cold starts on AWS Lambda. 12x faster and 2x cheaper than managed Node. Still very early but these results should speak for themselves :)
goose.icu/lambda/
potential outcomes:
- we both waste 5 minutes
- I waste a few hours
- we cry with joy :)
If you have AWS Lambdas with a small amount of (Node)JS please DM me :)
Oh yeah, but I mean it relies on the engine itself having ASan etc
Yeah, Fuzzilli is very cool but hard to integrate, especially since it is designed to be used with coverage and sanitizers, which I don't have
I like the idea of using a small model but I think it would heavily limit throughput. It currently runs >1k cases/second and it would probably be much less, even with a tiny model running locally
Porffor's graph of Test262 passing over time, currently at 60.30%
Porffor now passes over 60% of Test262, thanks to a new custom regex engine!
Random template generation, only generates valid JS! github.com/CanadaHonk/p...
fuzzing complete 🧪 78500 | 🤠 92% | ❌ 0.0% | 💀 0.33% | 🏗️ 3.7% | 💥 0.0% | ⏰ 3.6% | 📝 0.0%
3.6% timeout
2.3% CompileError: WebAssembly.Module(): Compiling function #0 failed: expected 2 elements on the stack for branch, found 0
0.68% CompileError: WebAssembly.Module(): Compiling function #0 failed: invalid branch depth: 3
0.39% CompileError: WebAssembly.Module(): Compiling function #0 failed: invalid branch depth: 4294967295
0.23% CompileError: WebAssembly.Module(): Compiling function #0 failed: invalid branch depth: 4
0.23% RangeError: Maximum call stack size exceeded
0.10% RuntimeError: memory access out of bounds
0.059% CompileError: WebAssembly.Module(): Compiling function #0 failed: invalid branch depth: 5
0.023% CompileError: WebAssembly.Module(): Compiling function #0 failed: invalid branch depth: 6
0.0025% CompileError: WebAssembly.Module(): Compiling function #0 failed: invalid branch depth: 8
Porffor now has its own fuzzer, which abuses the compiler with randomly generated JS code to find bugs that would otherwise go unnoticed, helping stability!
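for anyone curious how "random template generation, only generates valid JS" can work, here is a tiny hand-written sketch of the general technique (NOT Porffor's actual fuzzer, just an illustration): pick randomly from small grammars of statements and expressions, so every output parses.

```javascript
// Hedged sketch of template-based random JS generation; not Porffor's real fuzzer.
const pick = arr => arr[Math.floor(Math.random() * arr.length)];

// recursively build an expression; leaves are simple literals or `x`
const expr = depth =>
  depth <= 0
    ? pick(['0', '1', 'x', 'true'])
    : `(${expr(depth - 1)} ${pick(['+', '-', '*', '==='])} ${expr(depth - 1)})`;

// statements only ever assign to `x`, so redeclaration can't happen
const stmt = () => pick([
  `x = ${expr(2)};`,
  `if (${expr(1)}) x = ${expr(1)};`,
  `for (let i = 0; i < 3; i++) x = ${expr(1)};`,
]);

// one declaration up front keeps every generated program valid JS
const generate = () => ['let x = 0;', ...Array.from({ length: 5 }, stmt)].join('\n');

const program = generate();
new Function(program); // throws only if the generated code were invalid
console.log(program);
```

feeding each generated program to the compiler under test and diffing its behavior against a reference engine is then the bug-finding part.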
ECMAScript excitement!
Congrats to @goose.icu on advancing the Math.clamp proposal to Stage 2 at TC39 today!
Math.clamp(number, min, max) constrains the number to be within the stated range
github.com/tc39/proposa...
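for illustration, a userland approximation of the proposed behavior (a hand-written sketch, not the spec algorithm; the proposal's exact NaN and min > max handling is defined in its spec text):

```javascript
// Hedged sketch of Math.clamp(number, min, max) semantics; not the spec algorithm.
function clamp(number, min, max) {
  // NaN anywhere poisons the result
  if (Number.isNaN(number) || Number.isNaN(min) || Number.isNaN(max)) return NaN;
  // assumed here: an inverted range is a programmer error
  if (min > max) throw new RangeError('min must be <= max');
  return Math.min(Math.max(number, min), max);
}

console.log(clamp(5, 0, 10));  // 5
console.log(clamp(-3, 0, 10)); // 0
console.log(clamp(42, 0, 10)); // 10
```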
Screenshot of a terminal showing:
~/porffor$ node test262
test: 29837/50424 passed - 59.17% (+0.45)
(14 to go until 59.2%, 417 to go until 60%)
29837 7425 12893
🧪 50424 | 🤠 29837 (+227) | ❌ 7425 (-71) | 💀 12878 (-157) | 📝 0 | ⏰ 15 (+1) | 🏗️ 7 | 💥 262
Porffor now passes over 59% of Test262!
theoretically any two runtimes, but that remains to be seen/accepted in practice
~/porffor$ node bench/avg.js 10 "node --jitless bench/richards.js"
313
~/porffor$ node bench/avg.js 10 "./porf bench/richards.js"
272
Porffor's Wasm is now faster than *native* Node JITless at richards.js, an old V8 benchmark!
the new improvement is thanks to a new compiler feature and a minor object rewrite; I'll post about it this weekend. I still have ideas which could hopefully make this almost 2x faster this month!
sample profile: profile.porffor.dev/5903c8d87fb3
from today, just run `porf profile foo.js` and within a few moments get your own profile link to share
A screenshot of a web flamegraph/profiler UI
introducing porffor profile, an easy to use but detailed profiler, with shareable links!
always getting an error for BHM?
just had a great catch up with @goose.icu on porffor progress. keep an eye on this project! there are so many perf gains to be had from it once it is stable
A Porffor JS REPL:
> let foo = new BigInt64Array(4)
undefined
> foo[0] = 1337n // just a bigint
1337n
> foo[0] = 1 // not a bigint, error and ignore
Uncaught TypeError: Cannot convert to BigInt
> foo[1] = 4294967296n // over 32 bit unsigned limit
4294967296n
> foo[2] = 4611686018427387904n // near 64 bit signed limit, over accurate float limit
4611686018427387904n
> foo
BigInt64Array(4) [ 1337n, 4294967296n, 4611686018427387904n, 0n ]
Porffor now supports BigInt typed arrays: just this (plus some related changes) boosts Test262 passing by over 0.7%!
"test262: 57.02% (+0.27)"
Porffor now passes over 57% of Test262! This latest bump was thanks to a refactor that makes functions and methods distinct, to be more conformant for `this`, construction, etc
this fork is less conformant; normal QuickJS supports ES2023 :(
new js engine (quickjs fork) just dropped: github.com/lynx-family/...