Or the even terser:
elapsed := time.Since(start)
I will say that Joda-Time (and Java 8's java.time) set the gold standard for me.
date = date.with(next(WEDNESDAY));
Is magic when you need it!
docs.oracle.com/javase/8/doc...
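For anyone unfamiliar with that one-liner, it uses `java.time`'s `TemporalAdjusters`; a minimal self-contained sketch (the starting date is just an example):

```java
import java.time.DayOfWeek;
import java.time.LocalDate;
import static java.time.temporal.TemporalAdjusters.next;

public class NextWednesday {
    public static void main(String[] args) {
        // 2024-01-01 was a Monday; next(WEDNESDAY) jumps forward to the
        // first Wednesday strictly after this date.
        LocalDate date = LocalDate.of(2024, 1, 1);
        date = date.with(next(DayOfWeek.WEDNESDAY));
        System.out.println(date); // 2024-01-03
    }
}
```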
Terminal-Bench feels kinda close to what you're describing: www.tbench.ai/news/announc...
I imagine it would be a similar eval in task structure, but exercising different skills.
I saw @samwho.dev talking about their voice setup for coding; I think they were suggesting Talon or MacWhisper for voice.
Wow, that sounds horrible. Glad to hear it's over, and looking forward to reading the retro.
Works well in an internal setting, but would probably need to make it into the OTel client libs to be seamless.
When I was doing internal o11y at ClickHouse, for logs we'd mainly rely on the insane compression + SharedMergeTree (aka, blob store backed) engine. Was pretty cheap and pretty compressible.
I reckon something like a query time join could handle this. But you'd need to have pretty opinionated logs.
(More thinking aloud, sorry for spam) I guess just some reservoir sampling would work well here. Just make the service have a size quota, rather than log number quota.
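To make the thinking-aloud concrete: classic Algorithm R reservoir sampling keeps a fixed-size uniform sample of an unbounded stream. This sketch caps by line count for simplicity; the size-quota idea from the post would weight by payload bytes instead (class and method names here are illustrative, not any real API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Minimal reservoir sampler: after N offers, every line seen so far
// has an equal capacity/N chance of being in the sample.
public class LogReservoir {
    private final List<String> reservoir;
    private final int capacity;
    private final Random rng;
    private long seen = 0;

    public LogReservoir(int capacity, long seed) {
        this.capacity = capacity;
        this.reservoir = new ArrayList<>(capacity);
        this.rng = new Random(seed);
    }

    public void offer(String logLine) {
        seen++;
        if (reservoir.size() < capacity) {
            reservoir.add(logLine);
        } else {
            // Pick a uniform index in [0, seen); it lands inside the
            // reservoir with probability capacity/seen, evicting one entry.
            long j = (long) (rng.nextDouble() * seen);
            if (j < capacity) {
                reservoir.set((int) j, logLine);
            }
        }
    }

    public List<String> sample() {
        return List.copyOf(reservoir);
    }
}
```

The nice property for a logging quota is that the memory bound is fixed regardless of how noisy a service gets.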
Are you thinking head sampling in this case? My worry with tail sampling is always putting pressure on a few hot shards
Probably too easy to make something degen with a few small changes and flood the blob store though.
Wonder if there's any scope to upload some of the blobs directly to blob storage with some log collection smarts. So at the very least you don't saturate the full pipeline e2e with very large payloads. If it's content addressed storage you probably get most of the way there
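Thinking aloud in code: the content-addressing idea is just deriving the blob key from a hash of the payload, so identical large payloads dedupe to one object and the collector can skip re-uploads it has already seen. A hypothetical sketch (the `blobs/` key prefix is made up):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

// Illustrative only: content-addressed blob keys via SHA-256.
// Same bytes always map to the same key, so uploads are idempotent.
public class ContentAddress {
    public static String keyFor(byte[] payload) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(payload);
        return "blobs/" + HexFormat.of().formatHex(digest);
    }

    public static void main(String[] args) throws Exception {
        byte[] a = "very large log payload".getBytes(StandardCharsets.UTF_8);
        // Identical payloads -> identical keys -> upload once, reference many times.
        System.out.println(keyFor(a).equals(keyFor(a))); // true
    }
}
```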
PEBCAK is another great one. "Problem Exists Between Chair And Keyboard"
Literally the rest of the opinion piece is saying that this is unrealistic, given the chart is reporting data from somewhere else?
404 on the Jagex link btw
There is something about being able to _use_ your own product that simply makes it better. I think it's one part of the reason iBlocks just did better.
I use GWR reasonably often, and do enjoy getting the emails from the system unchanged from 5 years ago. I can tell because the GWR logo is still the slightly pixelated one because they couldn't get us the assets for go-live, and no one has complained!
I have very fond memories of working at iBlocks, a tiny company up against the Fujitsu and Worldline behemoths, and just winning and delivering against their glacial pace. I think they still run all the data distribution for the industry (fares, routing, timetables via the DTD)
I'll also say that there are still TOC-run call centres and customer care that need to exist, despite the automation. Not everyone is familiar, so you need that assistance too. But the case study of how much money got saved is cool! tracsis-iblocks.com/case-studies...
Yeah, there is some matching that makes the booking system line up, but there were a few failed pushes to make it unified that never made it to the end. Too much of the industry was split into parts run by vendors. Too much politicking by RDG and bigger players!
So, really, it's already been hived off to be a separate company! It was pretty much 3-4 people who built and ran the system, hitting 85-90% automation rate on the claims if I remember.
One of the magical things we could do was automatically link advance purchase tickets to the delayed trains. So part of the system automatically emails customers on delayed advance purchase trains
The delay repay bit of Avanti, GWR, SWR, Southern and Thameslink (among others!) is all run by one company external to the TOCs. Used to be a tiny company called iBlocks, who got bought by Tracsis. I helped build parts of that DR system!
ClickHouse does this (and has for some time!) clickhouse.com/docs/materia...
It scales pretty well, we handle 10s of millions of events per second in our clusters without really having to worry about it! And you can chain them too, so A -> B -> C data flows can happen
It's a tricky balance, and we're trying to shift things (slowly). The main problem is if you are used to being able to get onto any ClickHouse instance and do "select from query_log", that's the expectation going forwards. There is a secondary benefit though, in that LogHouse is one of the larger ClickHouse instances out there, which is perfect for dogfooding the product! We can find the issues that happen at scale and feed that into development.
2. Once we had all the wide data going directly to CH via SysEx, we could drop the log levels down so we didn't have to engage in a CPU arms race. As the blog mentions, we didn't even get to the sampling part, because the kubelet had often rotated the logs before we'd got a chance to read them.
There's two parts to it (IMO): 1. The system tables were definitely going through a lossy transformation of CH -> logs -> CH. Having the full fidelity there gives us some pretty incredible visibility. Being able to slice a query pattern across all instances and have that be linked to the ClickHouse stack frames meant we could identify code paths running hotter on deployed versions.
Some fun things we've been up to! clickhouse.com/blog/scaling...
(Obviously biased that I think ClickHouse is the best store for this ;) )