Even people who just started programming usually realize that they are better and faster by copy-pasting from tutorials/examples/forums, than by asking Claude to generate boilerplate (or "an app") for them.
In my experience, most programmers I know who have even a little respect for their profession also hate genAI.
Code is always a liability. More code means more bugs. Why would anyone take the time to build on top of a liability that no one wrote and no one is responsible for?
Screenshot showing skin with micro-occlusion at the top, and micro-shadowing below.
I finally wrapped up the second post in the series about micro-shadowing. In this one I go over a basic approach based on a microsurface, and I show results for different materials and lighting conditions.
irradiance.ca/posts/micros...
I invented 29% of x86.
vimeo.com/450406346
Our GPC 2025 talks are up on YT now!
youtu.be/fXakIV1OFes?...
youtu.be/mvCoqCic3nE?...
Needlets are spherically localized, fall off exponentially and really do form a Parseval Tight Frame. It's a mystery why we keep using Spherical Harmonics when those ring something terrible.
arxiv.org/pdf/1508.05406
Vulkan releases game engine tutorial
The Vulkan Working Group has published Building a Simple Game Engine, a new in-depth tutorial for developers ready to move beyond the basics and into professional-grade engine development.
Learn more: www.khronos.org/blog/new-vul...
#vulkan #tutorial #programming #gpu #gameengine
We're excited to announce that the slides and videos from the inaugural Shading Languages Symposium are now available! Catch up on all the proceedings and join us next year!
www.khronos.org/events/shadi...
#shading #shaders #programming #Slang #GLSL #HLSL #SPIR-V #glslang #WEST #WGSL #OSL #Gigi
Image shows a code example for the Python package introduced in this post. The code is as follows (for screenreaders):

import mitsuba_scene_description as msd
import mitsuba as mi

mi.set_variant("llvm_ad_rgb")

# Define components
diffuse = msd.SmoothDiffuseMaterial(reflectance=msd.RGB([0.8, 0.2, 0.2]))
ball = msd.Sphere(
    radius=1.0,
    bsdf=diffuse,
    to_world=msd.Transform().translate(0, 0, 3).scale(0.4),
)
cam = msd.PerspectivePinholeCamera(
    fov=45,
    to_world=msd.Transform().look_at(
        origin=[0, 1, -6], target=[0, 0, 0], up=[0, 1, 0]
    ),
)
integrator = msd.PathTracer()
emitter = msd.ConstantEnvironmentEmitter()

# builder pattern
scene = (
    msd.SceneBuilder()
    .integrator(integrator)
    .sensor(cam)
    .shape("ball", ball)
    .emitter("sun", emitter)
    .build()
)

# or
scene = msd.Scene(
    integrator=integrator,
    sensors=cam,  # also accepts a list for multi-sensor setups
    shapes={"ball": ball},
    emitters={"sun": emitter},
)

mi.load_dict(scene.to_dict())
# will return:
# {'ball': {'bsdf': {'reflectance': {'type': 'rgb', 'value': [0.8, 0.2, 0.2]},
#                    'type': 'diffuse'},
#           'radius': 1.0,
#           'to_world': Transform[
#               matrix=[[0.4, 0, 0, 0],
#                       [0, 0.4, 0, 0],
#                       [0, 0, 0.4, 1.2],
#                       [0, 0, 0, 1]],
#               ...
#           ],
#           'type': 'sphere'},
#  'integrator': {'type': 'path'},
#  'sensor': {'fov': 45, 'to_world': Transform[...], 'type': 'perspective'},
#  'sun': {'type': 'constant'},
#  'type': 'scene'}
G'day!
I've just published a new version of mitsuba-scene-description to GitHub and PyPI: github.com/pixelsandpoi...
I've changed the generation process, so you no longer need to manually clone and build the API yourself. The Mitsuba plugin API will now be generated during package build.
1/x
The University of Utah wrote a profile about my work.
It covers an award-winning Olympics project I contributed to at The New York Times, my ACM SIGGRAPH leadership, and my research at the Scientific Computing and Imaging Institute.
sci.utah.edu/borkiewicz-n...
Recently I've been taking another look at fitting movement model parameters from data and managed to derive some formulations that work in terms of starting and stopping times and distances - something that is potentially a lot more intuitive for designers to use.
theorangeduck.com/page/fitting...
Thanks! For those interested in how Prince of Persia got made, you can find my published journals, and my graphic novel REPLAY, on my website at jordanmechner.com -- along with archival materials and commentary (some of which is referenced in this article).
Post exploring the evolution of SIMT in GPUs: "SIMD Started It, SIMT Improved It" blog.siggraph.org/2026/01/simd...
Lovely write-up from Davide: thoughtful, well written, and interesting, on his experience and journey with Modeller.
#SDF #VR #Modelling
dakrunch.blogspot.com/2026/01/the-...
Marcin Zalewski informed me that his blog post on real-time hair rendering using ray tracing is now available:
mmzala.github.io/blog/hair-ge...
This is nice work, in which he compares NVIDIA's hardware-accelerated hair to several alternatives, including Reshetov's Phantom Ray Hair Intersector.
*sigh* here we go again. Your phone is not listening to everything around you 24/7 for advertising purposes. *If* you have voice activation on for the assistant, the mic is listening for the activation phrase: the processing power for listening for a single specific phrase is much lower.
Yeah, I found it rather... inflexible
Plus with blender you can directly put 3D scenes in the sequencer :D
I mean, it is the best (i.e.: only) open source video editor
For the first time in a while none of the SIGGRAPH papers I'll submit have any ML (no neural nets, no nothing) in them.
Why?
The truth is that for most of my use cases those techniques have yet to show any practical benefit over good ol' meshes and math.
It's not a direct answer to your question, but if your array represents the CDF of a tabulated function and you want to invert it in order to sample from it, then there is a beautiful O(1) method for *that*: the Vose/Walker alias method, IIRC. Not quite your question, but beautiful.
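For anyone curious, a minimal sketch of the alias method (my own illustration, not from the thread): spend O(n) building a table of n "columns", each holding at most two outcomes, after which every sample costs one uniform index plus one biased coin flip.

```python
import random

def build_alias_table(probs):
    """Vose's alias method: O(n) setup for O(1) sampling
    from a discrete distribution given as a list of probabilities."""
    n = len(probs)
    scaled = [p * n for p in probs]          # rescale so the average is 1
    prob = [0.0] * n                          # acceptance threshold per column
    alias = [0] * n                           # fallback outcome per column
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s] = scaled[s]                   # column s keeps its own mass...
        alias[s] = l                          # ...and is topped up from l
        scaled[l] = (scaled[l] + scaled[s]) - 1.0
        (small if scaled[l] < 1.0 else large).append(l)
    for leftover in small + large:            # numerical leftovers: full columns
        prob[leftover] = 1.0
    return prob, alias

def sample(prob, alias, rng=random):
    """Draw one index in O(1): pick a column, then flip a biased coin."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]
```

Each column i contributes prob[i]/n to outcome i and (1 - prob[i])/n to outcome alias[i], which is how the original distribution is recovered exactly.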
My blog post about how software rendered depth based occlusion culling in Block Game functions is out now! enikofox.com/posts/softwa...
#GameDev #ProcGen
A 3DCG rendered image of a dark subway corridor
Subway Incident by 0b5vr
4KB Executable Graphics
Appeared in Operator Digitalfest 2026
www.pouet.net/prod.php?whi...
www.shadertoy.com/view/lcV3RW
Sure. AI companies have ALWAYS been training their models on Wikipedia content, which under the free and open access model is available to anyone, including AI companies. Agreements like these require AI companies to limit and offset the strain they place on Wikimedia infrastructure.
I am teaching a Computer Graphics class this semester where I am going to pair technical readings with STS-type readings. The students are evaluated on both.
Optimizing spatiotemporal variance-guided filtering for modern GPU architectures jcgt.org/published/00...
Fascinating! It's taken 40 years, but a group have found a faster shortest path technique than Dijkstra's algorithm!
#gamedev #indiedev #gamemaker #gamedevelopment
www.quantamagazine.org/new-method-i...
cseweb.ucsd.edu/~tzli/novelt...
I gave an internal talk at UCSD last year regarding "novelty" in computer science research. In it I "debunked" some of the myths people seem to hold about what counts as good research in computer science these days. People seemed to like it, so I thought I should share.
I've prepared a little blog post discussing the ideas behind the new Smooth Walking Mode we developed for UE's Mover plugin. This Movement Mode is also used in the new UE 5.8 GASP release!
theorangeduck.com/page/new-mov...
Smart use of AI!
Irradiance = power per area (of light coming in from all directions)
Radiosity = power per area (of light going out)
Radiance = power per area per solid angle.
You need irradiance to light diffuse surfaces, radiance to light arbitrary surfaces.
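In symbols (my notation, standard in radiometry texts), the three quantities above relate as integrals of radiance over the hemisphere $\Omega$ about the surface normal:

```latex
% Radiance: power per area per solid angle, W / (m^2 sr)
L(\omega)

% Irradiance: incoming radiance integrated over the hemisphere
E = \int_{\Omega} L_i(\omega)\,\cos\theta \,\mathrm{d}\omega
    \qquad [\mathrm{W/m^2}]

% Radiosity: outgoing radiance integrated over the hemisphere
B = \int_{\Omega} L_o(\omega)\,\cos\theta \,\mathrm{d}\omega
    \qquad [\mathrm{W/m^2}]

% For a Lambertian (diffuse) surface with albedo \rho,
% outgoing radiance is constant, which is why E alone suffices:
L_o = \frac{\rho}{\pi}\, E
```

The last line is the reason irradiance is enough for diffuse surfaces: a Lambertian BRDF is the constant $\rho/\pi$, so the view direction drops out.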