
Duarte David

@duartedavid

Raytracers are my "hello world"s · drcd1.github.io · he/him

33 Followers · 142 Following · 24 Posts · Joined 18.10.2024

Latest posts by Duarte David @duartedavid

Even people who have just started programming usually realize that they are better and faster copy-pasting from tutorials/examples/forums than asking Claude to generate boilerplate (or "an app") for them.

06.03.2026 21:47 ๐Ÿ‘ 2 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

From my experience, most programmers I know with a little bit of respect for their profession also hate genAI.

Code is always a liability. More code means more bugs. Why would anyone take the time to build on top of a liability that no one wrote and no one is responsible for?

06.03.2026 21:44 ๐Ÿ‘ 2 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0
Screenshot showing skin with micro-occlusion at the top, and micro-shadowing below.

I finally wrapped up the second post in the series on micro-shadowing. In this one I go over a basic approach based on a microsurface, and I show results for different materials and lighting conditions.

irradiance.ca/posts/micros...

05.03.2026 21:49 ๐Ÿ‘ 38 ๐Ÿ” 10 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

I invented 29% of x86.
vimeo.com/450406346

05.03.2026 05:31 ๐Ÿ‘ 83 ๐Ÿ” 5 ๐Ÿ’ฌ 6 ๐Ÿ“Œ 0
Visibility Buffer and Deferred Rendering in DOOM: The Dark Ages (YouTube video by Graphics Programming Conference)

Our GPC 2025 talks are up on YT now!
youtu.be/fXakIV1OFes?...

youtu.be/mvCoqCic3nE?...

05.03.2026 10:29 ๐Ÿ‘ 30 ๐Ÿ” 14 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

Needlets are spherically localized, fall off exponentially, and really do form a Parseval tight frame. It's a mystery why we keep using Spherical Harmonics when those ring something terrible.

arxiv.org/pdf/1508.05406

26.02.2026 06:33 ๐Ÿ‘ 13 ๐Ÿ” 3 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0
Vulkan releases game engine tutorial

The Vulkan Working Group has published Building a Simple Game Engine, a new in-depth tutorial for developers ready to move beyond the basics and into professional-grade engine development.

Learn more: www.khronos.org/blog/new-vul...
#vulkan #tutorial #programming #gpu #gameengine

25.02.2026 14:34 ๐Ÿ‘ 244 ๐Ÿ” 37 ๐Ÿ’ฌ 10 ๐Ÿ“Œ 5

We're excited to announce that the slides and videos from the inaugural Shading Languages Symposium are now available! Catch up on all the proceedings and join us next year!

www.khronos.org/events/shadi...
#shading #shaders #programming #Slang #GLSL #HLSL #SPIR-V #glslang #WEST #WGSL #OSL #Gigi

24.02.2026 18:57 ๐Ÿ‘ 2 ๐Ÿ” 1 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 1
Image shows a code example for the Python package introduced in this post.

The code is as follows (for screen readers):
import mitsuba_scene_description as msd
import mitsuba as mi

mi.set_variant("llvm_ad_rgb")

# Define components
diffuse = msd.SmoothDiffuseMaterial(reflectance=msd.RGB([0.8, 0.2, 0.2]))
ball = msd.Sphere(
    radius=1.0,
    bsdf=diffuse,
    to_world=msd.Transform().translate(0, 0, 3).scale(0.4),
)
cam = msd.PerspectivePinholeCamera(
    fov=45,
    to_world=msd.Transform().look_at(
        origin=[0, 1, -6], target=[0, 0, 0], up=[0, 1, 0]
    ),
)
integrator = msd.PathTracer()
emitter = msd.ConstantEnvironmentEmitter()

# builder pattern
scene = (
    msd.SceneBuilder()
    .integrator(integrator)
    .sensor(cam)
    .shape("ball", ball)
    .emitter("sun", emitter)
    .build()
)

# or equivalently, via the constructor:
scene = msd.Scene(
    integrator=integrator,
    sensors=cam,  # also accepts a list for multi-sensor setups
    shapes={"ball": ball},
    emitters={"sun": emitter},
)

mi.load_dict(scene.to_dict())
# scene.to_dict() produces:
{'ball': {'bsdf': {'reflectance': {'type': 'rgb', 'value': [0.8, 0.2, 0.2]},
                   'type': 'diffuse'},
          'radius': 1.0,
          'to_world': Transform[
  matrix=[[0.4, 0, 0, 0],
          [0, 0.4, 0, 0],
          [0, 0, 0.4, 1.2],
          [0, 0, 0, 1]],
  ...
],
          'type': 'sphere'},
 'integrator': {'type': 'path'},
 'sensor': {'fov': 45,
            'to_world': Transform[...],
            'type': 'perspective'},
 'sun': {'type': 'constant'},
 'type': 'scene'}


G'day!
I've just published a new version of mitsuba-scene-description to GitHub and PyPI: github.com/pixelsandpoi...

I've changed the generation process, so you no longer need to manually clone and build the API yourself. The Mitsuba plugin API will now be generated during package build.

1/x

23.02.2026 10:09 ๐Ÿ‘ 4 ๐Ÿ” 2 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0
How a Scientific Visualization PhD Student Became an Award-Winning Sports Journalist - Scientific Computing and Imaging Institute As U.S. sprinter Noah Lyles surged in the final stride to win the men's 100-meter dash at the Paris 2024 Olympics, the closest finish in modern history, The New York Times graphics team faced a race of ...

The University of Utah wrote a profile about my work.

It covers an award-winning Olympics project I contributed to at The New York Times, my ACM SIGGRAPH leadership, and my research at the Scientific Computing and Imaging Institute.

sci.utah.edu/borkiewicz-n...

20.02.2026 19:07 ๐Ÿ‘ 5 ๐Ÿ” 2 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

Recently I've been taking another look at fitting movement model parameters from data and managed to derive some formulations that work in terms of starting and stopping times and distances - something that is potentially a lot more intuitive for designers to use.

theorangeduck.com/page/fitting...

15.02.2026 21:05 ๐Ÿ‘ 12 ๐Ÿ” 5 ๐Ÿ’ฌ 2 ๐Ÿ“Œ 0

Thanks! For those interested in how Prince of Persia got made, you can find my published journals, and my graphic novel REPLAY, on my website at jordanmechner.com -- along with archival materials and commentary (some of which is referenced in this article).

10.02.2026 07:59 ๐Ÿ‘ 397 ๐Ÿ” 137 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 4
SIMD Started It, SIMT Improved It - ACM SIGGRAPH Blog By blending thread abstractions with SIMD hardware, GPUs evolved into flexible processors for graphics, AI, and scientific computing.

Post exploring the evolution of SIMT in GPUs: "SIMD Started It, SIMT Improved It" blog.siggraph.org/2026/01/simd...

05.02.2026 21:13 ๐Ÿ‘ 14 ๐Ÿ” 4 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0
The Road to Substance Modeler: VR Roots, Desktop Reinvention I volunteered to join the Substance Modeler team less than a year after it came to Adobe, from Oculus. The team was really strong, and it...

A lovely write-up from Davide, thoughtful, well written, and interesting, on his experience and journey with Modeller.
#SDF #VR #Modelling

dakrunch.blogspot.com/2026/01/the-...

05.02.2026 16:46 ๐Ÿ‘ 5 ๐Ÿ” 1 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

Marcin Zalewski informed me that his blog post on real-time hair rendering using ray tracing is now available:
mmzala.github.io/blog/hair-ge...
This is nice work, in which he compares NVidia's hardware accelerated hair to several alternatives, including Reshetov's Phantom Ray Hair Intersector.

02.02.2026 15:29 ๐Ÿ‘ 40 ๐Ÿ” 13 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

*sigh* here we go again. Your phone is not listening to everything around you 24/7 for advertising purposes. *If* you have voice activation on for the assistant, the mic is listening for the activation phrase: the processing power for listening for a single specific phrase is much lower.

02.02.2026 01:07 ๐Ÿ‘ 896 ๐Ÿ” 324 ๐Ÿ’ฌ 13 ๐Ÿ“Œ 26

Yeah, I found it rather... inflexible

Plus with Blender you can directly put 3D scenes in the sequencer :D

23.01.2026 22:40 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

I mean, it is the best (i.e. the only) open source video editor

23.01.2026 22:34 ๐Ÿ‘ 5 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

For the first time in a while none of the SIGGRAPH papers I'll submit have any ML (no neural nets, no nothing) in them.

Why?

The truth is that for most of my use cases those techniques have yet to show any practical benefit over good ol' meshes and math.

22.01.2026 14:28 ๐Ÿ‘ 13 ๐Ÿ” 2 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

It's not a direct answer to your question, but in the case that your array represents the CDF of a tabulated function and you want to invert it in order to sample from it, there is a beautiful O(1) method for *that* called Vose's/Walker's alias method, iirc. Not quite your q, but beautiful.
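For reference, a minimal sketch of that alias method (function names and structure are my own illustration, not from any particular library): an O(n) table build over the weights, then O(1) per draw.

```python
import random

def build_alias_table(weights):
    """Vose's O(n) setup: split each outcome into at most two columns."""
    n = len(weights)
    total = sum(weights)
    scaled = [w * n / total for w in weights]  # rescale so the average is exactly 1
    prob = [0.0] * n
    alias = [0] * n
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l   # column s keeps s w.p. scaled[s], else yields l
        scaled[l] += scaled[s] - 1.0       # l donated (1 - scaled[s]) of its mass to column s
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                # leftovers carry mass 1 up to rounding
        prob[i] = 1.0
    return prob, alias

def sample_alias(prob, alias, rng=random):
    """O(1) draw: pick a uniform column, then flip its biased coin."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]
```

Each draw costs one uniform index plus one comparison, independent of table size, whereas binary-searching the CDF is O(log n) per draw.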

19.01.2026 23:24 ๐Ÿ‘ 7 ๐Ÿ” 2 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0
Software occlusion culling in Block Game My GPU is the integrated Radeon Vega 8 that comes with my AMD Ryzen 7 5700G CPU. I tell you this so you know that my workstation is not a graphical computing powerhouse. It is, in fact, quite weak. To...

My blog post about how the software-rendered, depth-based occlusion culling in Block Game works is out now! enikofox.com/posts/softwa...

#GameDev #ProcGen ๐ŸŽฎ

16.01.2026 22:10 ๐Ÿ‘ 257 ๐Ÿ” 56 ๐Ÿ’ฌ 3 ๐Ÿ“Œ 2
A 3DCG rendered image of a dark subway corridor

Subway Incident by 0b5vr
4KB Executable Graphics
Appeared in Operator Digitalfest 2026

www.pouet.net/prod.php?whi...
www.shadertoy.com/view/lcV3RW

17.01.2026 09:34 ๐Ÿ‘ 37 ๐Ÿ” 14 ๐Ÿ’ฌ 2 ๐Ÿ“Œ 1

Sure. AI companies have ALWAYS been training their models on Wikipedia content, which under the free and open access model is available to anyone, including AI companies. Agreements like these require AI companies to limit and offset the strain they place on Wikimedia infrastructure.

15.01.2026 18:47 ๐Ÿ‘ 4771 ๐Ÿ” 1396 ๐Ÿ’ฌ 40 ๐Ÿ“Œ 140

I am teaching a Computer Graphics class this semester where I am going to pair technical readings with STS-type readings. The students are evaluated on both.

12.01.2026 23:54 ๐Ÿ‘ 12 ๐Ÿ” 2 ๐Ÿ’ฌ 2 ๐Ÿ“Œ 2

Optimizing spatiotemporal variance-guided filtering for modern GPU architectures jcgt.org/published/00...

12.01.2026 22:48 ๐Ÿ‘ 26 ๐Ÿ” 9 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0
New Method Is the Fastest Way To Find the Best Routes | Quanta Magazine A canonical problem in computer science is to find the shortest route to every point in a network. A new approach beats the classic algorithm taught in textbooks.

Fascinating! It's taken 40 years, but a group have found a faster shortest-path technique than Dijkstra's algorithm!

#gamedev #indiedev #gamemaker #gamedevelopment

www.quantamagazine.org/new-method-i...

11.01.2026 20:27 ๐Ÿ‘ 55 ๐Ÿ” 13 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 1

cseweb.ucsd.edu/~tzli/novelt...
I gave an internal talk at UCSD last year regarding "novelty" in computer science research. In it I "debunked" some of the myths people seem to have about what is good research in computer science these days. People seemed to like it, so I thought I should share.

09.01.2026 17:21 ๐Ÿ‘ 75 ๐Ÿ” 25 ๐Ÿ’ฌ 2 ๐Ÿ“Œ 2

I've prepared a little blog post discussing the ideas behind the new Smooth Walking Mode we developed for UE's Mover plugin. This Movement Mode is also used in the new UE 5.8 GASP release!

theorangeduck.com/page/new-mov...

07.01.2026 17:44 ๐Ÿ‘ 17 ๐Ÿ” 5 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

Smart use of AI!

04.01.2026 16:58 ๐Ÿ‘ 10 ๐Ÿ” 2 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

Irradiance = power per area (of light coming in from all directions)
Radiosity = power per area (of light going out)
Radiance = power per area per solid angle.

You need irradiance to light diffuse surfaces, radiance to light arbitrary surfaces.
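Spelled out with flux $\Phi$, area $A$, and solid angle $\omega$ (standard radiometry, just restating the definitions above):

```latex
% Irradiance: incident power per unit area
E = \frac{\mathrm{d}\Phi_{\text{in}}}{\mathrm{d}A}
\qquad
% Radiosity: exitant power per unit area
B = \frac{\mathrm{d}\Phi_{\text{out}}}{\mathrm{d}A}
\qquad
% Radiance: power per unit projected area per unit solid angle
L = \frac{\mathrm{d}^2\Phi}{\cos\theta \,\mathrm{d}A \,\mathrm{d}\omega}
```

They connect via $E = \int_{\Omega} L_{\text{in}}(\omega)\cos\theta\,\mathrm{d}\omega$, and a Lambertian surface with albedo $\rho$ reflects $L_{\text{out}} = \frac{\rho}{\pi}E$, which is why irradiance alone suffices for diffuse lighting.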

26.12.2025 18:44 ๐Ÿ‘ 2 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0