
Christoph Lutz

@christophlutz

Those who don't jump will never fly. https://www.0xChris.dev

140 Followers · 83 Following · 191 Posts · Joined 29.10.2024

Latest posts by Christoph Lutz @christophlutz

The inner workings of TCP zero-copy

blog.tohojo.dk/2026/02/the-...

04.03.2026 12:03 👍 0 🔁 0 💬 0 📌 0
Adaptive Scalable LGWR Mode Switch Threshold Single->Scalable Oracle fundamentally redesigned the LGWR architecture in 12c. In earlier versions, LGWR ran as a single process, whereas Oracle 12c and later can dynamically switch between a single LGWR ...

When does lgwr transition from single to scalable mode?

t.ly/Ohe7a

03.03.2026 10:55 👍 2 🔁 3 💬 0 📌 0

Everybody should get rid of tnsnames.ora and use Easy Connect instead. Change my mind.

13.02.2026 20:49 👍 3 🔁 0 💬 0 📌 0
Mini-blog series: Oracle Data Guard 26ai new features Oracle AI Database 26ai is here for Linux x86_64, packed with new features for Data Guard! Stay tuned for quick blog posts on key changes.

Just discovered this new blog series by @ludovico.bsky.social explaining Data Guard enhancements in 26ai - must read 👇

www.ludovicocaldara.net/dba/dg-26ai-...

10.02.2026 03:53 👍 5 🔁 1 💬 0 📌 0

I felt like taking a closer look at this new datapatch feature and ended up making some interesting discoveries 🙂

t.ly/Tufob

12.01.2026 10:42 👍 7 🔁 1 💬 1 📌 0
Messe Basel One of the city's newer landmarks is the new Messe Basel building. The central architectural and urban-planning element of the hall complex developed by Herzog & de Meuron is the City L...

Yeah, it looks like 😀 https://www.basel.com/de/attraktionen/messe-basel-d184427748

05.01.2026 11:46 👍 1 🔁 0 💬 1 📌 0

Glad I’m already on vacation and don’t have to deal with this drama anymore this year 😜

16.12.2025 18:57 👍 4 🔁 0 💬 1 📌 0
From profiling to kernel patch: the journey to an eBPF performance fix | Ritesh Oedayrajsingh Varma A story about how an innocent profiling session led to a change to the Linux kernel that makes eBPF map-in-map updates much faster.

rovarma.com/articles/fro...

14.12.2025 15:35 👍 0 🔁 0 💬 0 📌 0

Woot! By a happy twist of fate, a ticket for the 39th Chaos Communication Congress came my way ... 😀 See you in Hamburg! @ccc.de

Thanks @krischan.bsky.social!

10.12.2025 08:51 👍 2 🔁 0 💬 1 📌 0

Not needed, bottom line is quite simple: interruptions kill your focus. Some people won't understand 🤷‍♂️

09.12.2025 21:23 👍 1 🔁 0 💬 1 📌 0
Preview
The Math of Why You Can't Focus at Work Interruptions, recovery time, and task size: three numbers that determine if you'll get real work done. Interactive visualizations show the math behind bad days.

justoffbyone.com/posts/math-o...

28.11.2025 11:28 👍 0 🔁 0 💬 1 📌 0

#POUG2026 dress code 😜

www.galaxus.ch/en/page/woul...

@pougorg.bsky.social

23.11.2025 21:25 👍 8 🔁 0 💬 1 📌 0

12/11
Internally, the write_sz is stored in structures used by Pipelined Log Writes (Overlapped Redo Writes, OLRW). This makes me wonder if the write threshold was changed in 19.22 when Pipelined Log Writes were first introduced.

05.11.2025 06:51 👍 0 🔁 0 💬 0 📌 0

11/11
On Exadata X10+, Pipelined Log Writes make the threshold even more dynamic as the write_sz adapts continuously when lgwr is running in parallel, depending on how many lg workers are active and whether they are operating in thin or thick mode (a topic for another day).

05.11.2025 06:50 👍 0 🔁 0 💬 1 📌 0

10/11
This behavior can be observed (and changed) with gdb - highly experimental (t.ly/lqdJ0)!

Examples:

05.11.2025 06:50 👍 1 🔁 0 💬 1 📌 0

9/11
If only one or a few strands are active at gather time, wr_thresh may be larger than the total size of all active strands. In that situation, a session never stalls to signal lgwr, unless a strand completely fills up and a "log buffer space" wait occurs.

05.11.2025 06:50 👍 0 🔁 0 💬 1 📌 0

8/11
So interestingly, the "1/3 of log buffer full" rule only applies when the capacity per public strand is <= 1 MB and all strands are active at gather time!

05.11.2025 06:50 👍 0 🔁 0 💬 1 📌 0
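The condition above can be checked numerically. A minimal Python sketch, using the names from this thread; the block size and strand counts are illustrative values, not taken from a real instance:

```python
# Numeric check of the "1/3 of log buffer full" condition.
redo_block_size = 512              # bytes per redo block (illustrative)
strand_size = 1024 * 1024          # 1 MB per public strand
max_strands = actv_strands = 4     # all strands active at gather time

one_mb_blocks = (1024 * 1024) // redo_block_size
# stall_sz: smaller of "1 MB worth of redo blocks" or 1/3 of a strand's capacity
stall_sz = min(one_mb_blocks, strand_size // redo_block_size // 3)
write_sz = max_strands * stall_sz          # aggregate across all strands
wr_thresh = write_sz // actv_strands       # poke_pct = 100 (the default)

# With strand capacity <= 1 MB and all strands active, the per-strand
# threshold works out to exactly 1/3 of a strand, i.e. the classic rule:
print(wr_thresh, (strand_size // redo_block_size) // 3)  # 682 682
```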

7/11
The stall size (also measured in redo blocks/buffers) defaults to the smaller of "1 MB worth of redo blocks" or "1/3 of a strand's capacity in redo blocks":

stall_sz = least(1 MB/redo_block_size, strand_size/redo_block_size/3)

05.11.2025 06:49 👍 0 🔁 0 💬 1 📌 0

6/11
write_sz is derived from a per-strand stall size (explained in more detail below) and computed as:

write_sz = max_strands * stall_sz

So, write_sz is the aggregate across all strands, the wr_thresh, however, is per strand.

05.11.2025 06:49 👍 0 🔁 0 💬 1 📌 0

5/11
More importantly, the "start write threshold" also depends on the number of active public redo strands at gather time and defaults to:

single strand : wr_thresh = (write_sz * poke_pct/100)
multiple strands: wr_thresh = (write_sz * poke_pct/100) / actv_strands

05.11.2025 06:49 👍 0 🔁 0 💬 1 📌 0
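Putting the three formulas together (stall_sz, write_sz, wr_thresh), the whole derivation fits in a few lines. A Python sketch with the thread's names; the function signatures and the example values are my own, purely illustrative:

```python
def stall_sz(strand_size, redo_block_size=512):
    """Smaller of '1 MB worth of redo blocks' or 1/3 of a strand's capacity."""
    return min((1024 * 1024) // redo_block_size,
               strand_size // redo_block_size // 3)

def write_sz(max_strands, strand_size, redo_block_size=512):
    """Aggregate across all public strands."""
    return max_strands * stall_sz(strand_size, redo_block_size)

def wr_thresh(max_strands, actv_strands, strand_size,
              poke_pct=100, redo_block_size=512):
    """Start write threshold: per aggregate for a single active strand,
    divided across strands when several are active at gather time."""
    t = write_sz(max_strands, strand_size, redo_block_size) * poke_pct // 100
    return t if actv_strands <= 1 else t // actv_strands

# Example: 8 strands of 2 MB each, 512-byte redo blocks, all active.
# stall_sz = min(2048, 4096 // 3) = 1365 blocks; write_sz = 8 * 1365 = 10920
print(wr_thresh(max_strands=8, actv_strands=8, strand_size=2 * 1024 * 1024))  # 1365
```

With fewer active strands the same write_sz is divided by a smaller actv_strands, so the per-strand threshold grows, matching the 9/11 observation that a session may never stall when only a few strands are active.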

4/11
The "start write threshold" is computed based on the write size (explained below) and the value of parameter _target_log_write_size_percent_for_poke (which defaults to 100).

05.11.2025 06:49 👍 0 🔁 0 💬 1 📌 0

3/11
When a session allocates buffers in a public strand, it checks the "start write threshold" (kcrfw_redo_gen_ext). If <= 0 (it can go negative), the session "stalls" to signal lgwr to flush. The threshold is measured in redo buffers and decremented for each buffer allocated.

05.11.2025 06:49 👍 0 🔁 0 💬 1 📌 0
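The session-side bookkeeping described above is a simple countdown. A toy Python sketch (names and numbers are illustrative, not Oracle internals):

```python
def allocate_redo(n_buffers, wr_thresh):
    """Decrement the threshold by the buffers allocated; return the
    remaining threshold and whether the session stalls to signal lgwr."""
    remaining = wr_thresh - n_buffers   # decremented per buffer allocated
    return remaining, remaining <= 0    # can go negative

print(allocate_redo(100, wr_thresh=682))   # (582, False)
print(allocate_redo(700, wr_thresh=682))   # (-18, True): session stalls
```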

2/11
Before lgwr issues a redo write, it gathers the redo buffers from the public redo strands and computes a "start write threshold" (in kcrfw_gather_lwn). In kcrfa traces, this threshold appears as start_wr_thresh_kcrfa_client.

05.11.2025 06:49 👍 0 🔁 0 💬 1 📌 0

1/11
Everything changes... turns out the age-old rule that lgwr writes out the log buffer when it's 1/3 full no longer applies in recent Oracle versions.

Observations below from 19.26 (with RAC on Exadata). 👇

05.11.2025 06:48 👍 2 🔁 0 💬 1 📌 0