In my latest article, I break down the Philox RNG, which is a counter-based RNG along with its implementation inside PyTorch.
Read it here: blog.codingconfessions.com/p/how-pytorc...
Traditional RNGs, such as the Mersenne Twister, are hard to parallelize, and my mental model was still stuck in that era. It turns out, however, that systems like PyTorch use more modern techniques that are embarrassingly parallel.
I recently explored how DL frameworks leverage the parallel cores in the GPU to generate random numbers in parallel.
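To make the counter-based idea concrete, here is a minimal Python sketch. This is not Philox itself (Philox uses multiply-based rounds with high/low halves); it uses a splitmix64-style bit mixer purely to illustrate the key property: the i-th random number is a pure function of the seed and the counter i, so any number of cores can compute disjoint slices independently, in any order.

```python
MASK64 = (1 << 64) - 1

def mix64(x):
    # splitmix64-style finalizer: xor-shifts and odd-constant multiplies
    # that scramble the bits of the input thoroughly
    x = (x + 0x9E3779B97F4A7C15) & MASK64
    x = ((x ^ (x >> 30)) * 0xBF58476D1CE4E5B9) & MASK64
    x = ((x ^ (x >> 27)) * 0x94D049BB133111EB) & MASK64
    return x ^ (x >> 31)

def counter_rng(seed, counter):
    # The i-th output is a pure function of (seed, i): no hidden state,
    # so any core can jump straight to any index.
    return mix64(seed ^ mix64(counter))

# Sequential generation...
seq = [counter_rng(42, i) for i in range(8)]
# ...matches out-of-order generation: the hallmark of a counter-based RNG.
shuffled = {i: counter_rng(42, i) for i in [5, 0, 7, 2, 1, 6, 3, 4]}
assert all(seq[i] == shuffled[i] for i in range(8))
```

A stateful RNG like the Mersenne Twister cannot do this: producing output i requires having produced outputs 0 through i-1 first.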
In my latest article, I discuss this in detail. The article walks through the Linux kernel code that handles system calls and breaks down the performance implications of each step the kernel takes.
Read it here: blog.codingconfessions.com/p/what-makes...
More importantly, system calls also slow down user space code after the return from the kernel, and this can be the bigger cost. It happens because the process loses microarchitectural state: the instruction pipeline gets drained, branch predictor buffers get flushed, and so on.
System calls are the interface between user space and the kernel. They are needed for fundamental things like reading a file or making a network call. But they are also expensive, because the CPU has to do work like saving and restoring registers, switching stacks, and updating page tables.
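One practical consequence: if every operation triggers its own syscall, batching in user space pays off. A small sketch (file path and sizes are arbitrary) contrasting unbuffered writes, where every `write()` is a separate kernel crossing, with Python's default user-space buffering, which produces the same file with only a handful of syscalls:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.bin")

# Unbuffered: every write() below is a separate system call --
# 1000 user->kernel round trips, each paying the full entry/exit cost.
with open(path, "wb", buffering=0) as f:
    for _ in range(1000):
        f.write(b"x")

# Buffered (the default): Python accumulates the bytes in a user-space
# buffer and issues only a few write() syscalls when the buffer fills
# or the file is closed. Same result, a fraction of the kernel crossings.
with open(path, "wb") as f:
    for _ in range(1000):
        f.write(b"x")

assert os.path.getsize(path) == 1000
```

You can watch the difference directly by running the script under `strace -c` and comparing the `write` counts.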
What Makes System Calls Expensive: A Linux Internals Deep Dive.
blog.codingconfessions.com/p/what-makes...
Another great post by @abhi9u.bsky.social. I learned a lot, including vDSO.
Errata-2: Increased penalties for cache misses and branch misses increase the CPI (cycles per instruction) and lower the IPC.
The article has all the details correctly explained.
Errata-1: I meant to write the law as performance = 1 / (instruction count x cycles per instruction x cycle time).
Read it here: blog.codingconfessions.com/p/one-law-to...
In my latest article, I show this connection with the help of several common optimizations:
- loop unrolling
- function inlining
- SIMD vectorization
- branch elimination
- cache optimization
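As a flavor of the first item, here is a hypothetical Python sketch of loop unrolling. In CPython the win is modest since interpreter overhead dominates, but the transformation is exactly what compilers apply to hot loops: fewer loop-bookkeeping operations (lower dynamic instruction count) and more independent work per iteration (higher IPC).

```python
def dot(a, b):
    # Straightforward loop: one multiply-add plus loop bookkeeping
    # (index update, bounds check, branch) per element.
    total = 0.0
    for i in range(len(a)):
        total += a[i] * b[i]
    return total

def dot_unrolled(a, b):
    # Unrolled by 4: the same work with a quarter of the loop
    # bookkeeping, and four multiply-adds per iteration that the
    # hardware can overlap.
    total = 0.0
    n = len(a) - len(a) % 4
    for i in range(0, n, 4):
        total += (a[i] * b[i] + a[i + 1] * b[i + 1]
                  + a[i + 2] * b[i + 2] + a[i + 3] * b[i + 3])
    for i in range(n, len(a)):  # leftover elements
        total += a[i] * b[i]
    return total

a = [1.0] * 10
b = [2.0] * 10
assert dot(a, b) == dot_unrolled(a, b) == 20.0
```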
If you are someone who thinks about performance, I think you will find it worth your time.
Reasoning about microarchitectural code optimizations works similarly. Each optimization affects one or more of these factors:
- number of instructions executed (dynamic instruction count)
- instructions executed per cycle (IPC)
- CPU clock cycle time
As an example: when increasing the depth of the instruction pipeline, CPU architects can use this law to decide how much deeper they can go. A deeper pipeline reduces the cycle time, but it indirectly increases the CPI (and lowers the IPC) because cache miss and branch miss penalties become more severe.
CPU architects use the Iron Law to analyze how a change in the microarchitecture will affect the CPU's performance.
The law defines CPU performance in terms of three factors:
Performance = 1 / (instruction count x cycles per instruction x cycle time)
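Plugging hypothetical numbers into the law shows how it gets used in practice, for example to evaluate the pipeline-depth trade-off: a deeper pipeline buys a faster clock, but if the CPI rises enough, the net effect is a slowdown.

```python
def performance(instr_count, cpi, cycle_time_ns):
    # Iron Law: execution time = IC x CPI x cycle time;
    # performance is its reciprocal.
    return 1.0 / (instr_count * cpi * cycle_time_ns)

# Hypothetical baseline: 1e9 instructions, CPI of 1.5, 0.5 ns cycle (2 GHz).
base = performance(1e9, 1.5, 0.5)

# Deeper pipeline: cycle time drops to 0.4 ns (2.5 GHz), but harsher
# branch-miss penalties push CPI up to 1.9.
deep = performance(1e9, 1.9, 0.4)

# The clock got 25% faster, yet the overall "speedup" is ~0.99x:
# the CPI increase ate the entire clock-speed gain and then some.
speedup = deep / base
assert speedup < 1.0
```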
This is where you wish for a theory to systematically reason about optimizations, so that you know why they work and when they don't. A framework that turns guesswork into surgical precision.
Does something like this exist? It turns out there is: the Iron Law of performance.
People talk about profiling to find bottlenecks, but rarely explain how to fix them. Many devs shoot in the dark, and when an optimization backfires, they aren't sure how to proceed.
Skipping computer architecture was my biggest mistake as a student. The hardware's execution model dictates memory layout, binaries, compiler output, and runtimes. If you want to build systems, learn how the CPU works. I wrote an article on this:
blog.codingconfessions.com/p/seeing-the...
If you want a more detailed explanation of what is happening here, and why the `not` operator doesn't need to do the same thing: see my article.
blog.codingconfessions.com/p/python-per...
Here's a fun visual showing how len() finds the length of a list in Python.
The size is stored right inside the object, but len() takes a five-pointer detour, only to land back where it started.
This is why `if not mylist` is ~2x faster for emptiness checks!
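You can check the claim yourself with `timeit` (the exact ratio varies by machine and Python version):

```python
import timeit

mylist = []

# `not mylist` asks the object for its truthiness directly;
# `len(mylist) == 0` goes through the len() lookup-and-call machinery
# first, then does a comparison on top.
t_not = timeit.timeit("not mylist", globals={"mylist": mylist},
                      number=1_000_000)
t_len = timeit.timeit("len(mylist) == 0", globals={"mylist": mylist},
                      number=1_000_000)

# Both give the same answer; only the path taken differs.
assert (not mylist) == (len(mylist) == 0)

print(f"not mylist:       {t_not:.3f}s")
print(f"len(mylist) == 0: {t_len:.3f}s")
```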
My article assumes zero prior background in computer architecture. It uses real-world analogies to explain how the CPU works and why these features exist, and gives code examples to put that knowledge into action.
blog.codingconfessions.com/p/hardware-a...
A mispredicted branch typically wastes 20-30 cycles, so you really don't want branch misses on a critical path. To avoid them, you need to know how branch predictors work and the techniques that help them predict branches better.
In the article you will find a couple of examples of how to do that.
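One such technique, sketched here in Python, is branch elimination: turn a data-dependent `if` into arithmetic so there is no jump for the predictor to miss. (In CPython the interpreter masks the effect; the payoff shows up in compiled code, where unpredictable branches on random data hurt badly.)

```python
import random

data = [random.randint(-100, 100) for _ in range(1000)]

def sum_positive_branchy(xs):
    # Data-dependent branch: on random data the predictor is right
    # only about half the time, so the hardware pays the misprediction
    # penalty constantly.
    total = 0
    for x in xs:
        if x > 0:
            total += x
    return total

def sum_positive_branchless(xs):
    # Branch-free version: the condition becomes arithmetic.
    # (x > 0) evaluates to 0 or 1, so the add contributes x or nothing,
    # and there is no unpredictable jump for the CPU to guess.
    total = 0
    for x in xs:
        total += x * (x > 0)
    return total

assert sum_positive_branchy(data) == sum_positive_branchless(data)
```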
If the prediction is right, your program finishes execution faster.
If the prediction is wrong, the speculative work is wasted and the CPU has to start over: fetch the right set of instructions and execute them.
Waiting for the branch to resolve means the instruction pipeline sits idle, which wastes resources.
To avoid this waste of time and resources, the CPU employs branch predictors to predict the branch direction and speculatively execute instructions along the predicted path.
Finally, branch prediction.
All real-world code is non-linear: it consists of branches such as if/else, switch/case, function calls, and loops. The execution flow depends on the branch outcome, which is not known until the processor executes the branch.
To build a performant system, you need to write code which takes advantage of how the caches work.
In the article I explain how caches work and how you can write code that takes advantage of them to avoid the penalty of slow main memory accesses.
E.g., an instruction could finish in 1 cycle, but often its operands are in main memory, and retrieving them takes hundreds of cycles. Caches hide this latency: a fetch from the L1 cache takes only 3-4 cycles.
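A classic illustration of cache-friendly access order, sketched in Python with a nested list standing in for a 2D array. (CPython lists hold pointers, so the effect here is muted; in C or NumPy the gap between the two traversals is dramatic.)

```python
N = 500
matrix = [[1] * N for _ in range(N)]

def sum_rows(m):
    # Row-major traversal: consecutive accesses hit neighboring
    # elements, so every cache line fetched gets fully used.
    return sum(m[i][j] for i in range(N) for j in range(N))

def sum_cols(m):
    # Column-major traversal: each access jumps to a different row's
    # storage, touching a new cache line almost every time.
    return sum(m[i][j] for j in range(N) for i in range(N))

# Same answer either way -- only the memory access pattern differs.
assert sum_rows(matrix) == sum_cols(matrix) == N * N
```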
While ILP enables the hardware to execute several instructions per cycle, those instructions are often bottlenecked on memory.
But ILP is only possible when your code provides enough independent instructions to the hardware. Compilers usually try to optimize for this, but sometimes you need to do it yourself.
In the article I explain how ILP works with code examples to demonstrate how you can take advantage of it.
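As a toy sketch of providing independent work, here is the classic multiple-accumulators pattern in Python. CPython itself won't speed this up, but this is the shape that compilers and out-of-order CPUs exploit in compiled code:

```python
def sum_single(xs):
    # One accumulator: every add depends on the previous one, forming
    # a serial dependency chain the CPU cannot overlap.
    total = 0
    for x in xs:
        total += x
    return total

def sum_two_accumulators(xs):
    # Two independent accumulators break the chain: the two running
    # sums have no dependency on each other, so a superscalar core can
    # advance both add chains in parallel.
    even = odd = 0
    for i in range(0, len(xs) - 1, 2):
        even += xs[i]
        odd += xs[i + 1]
    if len(xs) % 2:  # leftover element for odd-length input
        even += xs[-1]
    return even + odd

xs = list(range(101))
assert sum_single(xs) == sum_two_accumulators(xs) == 5050
```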
Instruction level parallelism (ILP) enables the processor to execute multiple instructions of your program in parallel, resulting in better utilization of execution resources and faster execution of your code.
In my latest article, I describe the mindset required to achieve this. I explain the three key architectural features of modern CPUs that are responsible for the performance of most of our code: instruction level parallelism, caches, and branch prediction.