December 2024 archive
Wed Dec 18 10:28:12 PM CET 2024
I like the .md2 format already
It's used for model and animation in Quake II. It was intimidating at first, but it turns out dead simple.
Basically you have a list of vertices and their normals, one set per frame because it's quite hard to share vertices when your model moves around. Then you have a list of texture coordinates; these of course can be shared. Then you have a single list of triangles, each holding indexes into the vertices and texture coordinates, shared by every frame, so you can start drawing.
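In struct form it's roughly the following (a sketch from memory of the id headers; the names are mine and I'm ignoring on-disk packing concerns):

#include <cstdint>
// Shared by every frame: divide s/t by the skin width/height to get UVs.
struct Md2TexCoord {
    int16_t s, t;
};
// Also shared: one triangle list for the whole model.
struct Md2Triangle {
    uint16_t vertexIndex[3];   // into the current frame's vertex list
    uint16_t texCoordIndex[3]; // into the shared texcoord list
};
// Stored per frame: compressed position plus an index into a fixed normal table.
struct Md2Vertex {
    uint8_t position[3];  // expanded with the frame's scale/translate
    uint8_t normalIndex;
};
struct Md2Frame {
    float scale[3];
    float translate[3];
    char name[16];
    // followed on disk by one Md2Vertex per vertex
};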
And that's pretty much it! There's also a section for more optimized drawing, using both triangle strips and fans to save draw commands. But since D3D10+ stopped supporting triangle fans (and therefore SDL3 GPU does not either), that's a no-go.
We can probably still optimize if we want, by detecting duplicate vertices (with attributes) and drawing with an index buffer instead. Not sure how much that saves, but it sure is interesting to find out.
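A rough sketch of what I mean, with a made-up Vertex type (the real attributes would be the position, normal and UV pulled out of the md2 data):

#include <cstdint>
#include <cstring>
#include <map>
#include <vector>
struct Vertex {
    float px, py, pz;  // position
    float nx, ny, nz;  // normal
    float u, v;        // texture coordinates
    // Bitwise ordering is enough to spot exact duplicates.
    bool operator<(const Vertex& o) const {
        return std::memcmp(this, &o, sizeof(Vertex)) < 0;
    }
};
// Collapse exact duplicates and emit an index buffer for indexed drawing.
void deduplicate(const std::vector<Vertex>& in,
                 std::vector<Vertex>& outVertices,
                 std::vector<uint32_t>& outIndices) {
    std::map<Vertex, uint32_t> seen;
    for (const Vertex& v : in) {
        auto [it, inserted] = seen.try_emplace(v, (uint32_t)outVertices.size());
        if (inserted) outVertices.push_back(v);
        outIndices.push_back(it->second);
    }
}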
Doing all this made me wonder if I actually applied textures back in glBomb, because the models are just red most of the time. Turns out the texture is there, but faint because the whole thing is zoomed out. And then we keep flashing red, not sure why. For flashiness? The next version won't be so flashy, and perhaps will have shadows!
Tue Dec 17 11:18:50 PM CET 2024
Quake 2 "md2" format
Finally got over lighting (the simplest kind) and started to look at rendering something nicer than a cube. So I finally opened cone3dmd2.cc in glBomb.
I don't think I ever looked inside this code before? MD2 looks more like a gif because it stores multiple "frames", with complete vertices for each frame. I suppose it's easier to animate this way (than dealing with skeletal animation, which is a completely different beast). But it also means the naive plan to move to the modern gltf format is not going to work out well because ...
Ah but gltf does seem to support animation, so perhaps simply converting md2 to gltf would do the trick? Oii who cares. Just render it with md2 first. Convert it to gltf and do it again!
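If I do stay on md2 for a while, animating should (I assume, haven't actually written it yet) just be blending the vertex positions of two frames, something like:

#include <cstddef>
#include <vector>
struct Vec3 { float x, y, z; };
// a and b are the expanded vertex positions of two frames (same count and
// order in every frame); t in [0, 1] says how far we are between them.
std::vector<Vec3> blendFrames(const std::vector<Vec3>& a,
                              const std::vector<Vec3>& b, float t) {
    std::vector<Vec3> out(a.size());
    for (std::size_t i = 0; i < a.size(); ++i) {
        out[i] = { a[i].x + (b[i].x - a[i].x) * t,
                   a[i].y + (b[i].y - a[i].y) * t,
                   a[i].z + (b[i].z - a[i].z) * t };
    }
    return out;
}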
PS. Moving again?
Sun Dec 15 05:11:44 PM CET 2024
Misunderstanding fragment shader
Let's say we draw a triangle (will I ever draw anything more than a triangle?). We have vertex attributes for 3 vertices of course, and the vertex shader runs once per vertex to produce three outputs. Then some information from the vertex shader (most likely UV texture coordinates) is passed along to the fragment shader.
Here I actually thought the fragment shader was run three times, once per vertex, and all other pixels are "interpolated" somehow.
That would rather limit what a fragment shader can do. The reality, it seems, is that the fragment shader runs for every rasterized pixel (i.e. the entire triangle after it's mapped to window space), and it's given the coordinates of each pixel in gl_FragCoord. And we can do whatever we want with them.
This just means we essentially have a canvas to draw on. Except that we draw all pixels in parallel, so there can't be any dependencies between pixels in the drawing code.
The remaining question is, how do 3 vertex shader outputs get "attached" to a zillion fragment shader invocations (one per pixel)? It seems all vertex outputs are interpolated across the triangle, and how that's done can be controlled via interpolation qualifiers.
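In other words, per pixel the rasterizer conceptually does something like this with each vertex output (a toy model of plain smooth interpolation; real hardware also does perspective correction):

struct Vec2 { float u, v; };
// Given the three vertex-shader outputs of a triangle and the pixel's
// barycentric weights (wa + wb + wc == 1), the fragment shader sees the
// weighted average.
Vec2 interpolate(const Vec2& a, const Vec2& b, const Vec2& c,
                 float wa, float wb, float wc) {
    return { a.u * wa + b.u * wb + c.u * wc,
             a.v * wa + b.v * wb + c.v * wc };
}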
Fri Dec 13 05:13:01 PM CET 2024
I SDL3_GPU now!
Still trying to avoid going back to the lighting lesson in OpenGL. So after Vulkan, let's try Vulkan again, but this time less painfully, with SDL3's GPU abstraction.
It's not bad. But to be fair, nothing can be bad after the Vulkan experience. Some concepts stay, like pipeline creation. Buffer management is SDL's business, but you still have to say where the buffer should live and whether the CPU or GPU can access it.
Data transfer is also explicit, like in Vulkan: create a buffer the CPU can see, map it, copy data to it, then set up a copy pass to issue commands that copy it into a GPU buffer. Doing this in Vulkan is more involved because you also have to explicitly transition the destination buffer.
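From memory (and the SDL3 docs), the upload goes roughly like this; device, Vertex and vertices are assumed to exist already, and error checks are omitted:

// Sketch of the upload path described above.
Uint32 size = (Uint32)(vertices.size() * sizeof(Vertex));
// 1. A staging buffer the CPU can see.
SDL_GPUTransferBufferCreateInfo tbInfo = { SDL_GPU_TRANSFERBUFFERUSAGE_UPLOAD, size };
SDL_GPUTransferBuffer* transferBuf = SDL_CreateGPUTransferBuffer(device, &tbInfo);
// 2. Map it and copy the data in.
void* mapped = SDL_MapGPUTransferBuffer(device, transferBuf, false);
SDL_memcpy(mapped, vertices.data(), size);
SDL_UnmapGPUTransferBuffer(device, transferBuf);
// 3. The GPU-side destination buffer.
SDL_GPUBufferCreateInfo bufInfo = { SDL_GPU_BUFFERUSAGE_VERTEX, size };
SDL_GPUBuffer* vertexBuf = SDL_CreateGPUBuffer(device, &bufInfo);
// 4. A copy pass to send the copy command.
SDL_GPUCommandBuffer* uploadCmd = SDL_AcquireGPUCommandBuffer(device);
SDL_GPUCopyPass* copyPass = SDL_BeginGPUCopyPass(uploadCmd);
SDL_GPUTransferBufferLocation src = { transferBuf, 0 };
SDL_GPUBufferRegion dst = { vertexBuf, 0, size };
SDL_UploadToGPUBuffer(copyPass, &src, &dst, false);
SDL_EndGPUCopyPass(copyPass);
SDL_SubmitGPUCommandBuffer(uploadCmd);
SDL_ReleaseGPUTransferBuffer(device, transferBuf);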
After that it looks slightly more like OpenGL in the rendering loop, probably because it's greatly simplified. You have to bind the pipeline of course. Then instead of binding descriptor sets(?) you just bind the vertex buffers (haven't touched textures or uniform buffers yet). Then draw and submit.
The core code looks rather short and sweet:
// Per frame: grab a command buffer and the swapchain texture to draw into.
SDL_GPUCommandBuffer* cmdBuf = SDL_AcquireGPUCommandBuffer(device);
SDL_AcquireGPUSwapchainTexture(cmdBuf, window, &swapchainTexture, nullptr, nullptr);
// One render pass targeting the swapchain texture (via colorTargetInfo).
SDL_GPURenderPass* renderPass = SDL_BeginGPURenderPass(cmdBuf, &colorTargetInfo, 1, nullptr);
SDL_BindGPUGraphicsPipeline(renderPass, pipeline);
SDL_BindGPUVertexBuffers(renderPass, 0, &binding, 1);
SDL_DrawGPUPrimitives(renderPass, 3, 1, 0, 0);  // 3 vertices, 1 instance
SDL_EndGPURenderPass(renderPass);
SDL_SubmitGPUCommandBuffer(cmdBuf);
Since we're not going to reuse command buffers (and other stuff), fencing seems less important. SDL will just have to handle all that. Though if you have multiple command buffers with some dependency, then yeah fences will become important.
Doesn't look like SDL3 has pipeline cache yet, which is another pain point (and strong point?) of Vulkan.
Overall not bad. 200 lines for a triangle. Though I still need to try out uniform / storage buffers, textures and compute shaders. Wonder why geometry shaders aren't here; perhaps compute shaders can take care of that.
Wed Dec 11 08:05:53 PM CET 2024
On 3D rendering pipelines
One of the questions I had when revisiting all the exciting shaders and pipelines is: do we have one big pipeline (and shaders) that renders everything, or is it done as a series of draw calls? And if it's the latter, how does it even work when the first drawing "steps on" the second one?
The two are actually related. It turns out it's probably impractical to have just one giant vertex shader do everything. It's probably possible, but you would need a bunch of different branches to do different stuff, and I'm not sure you could really take advantage of the GPU that way.
So: multiple pipelines, multiple draw calls, and the second question. It turns out I forgot about depth testing. When the GPU "draws" a pixel, it also keeps track of depth information for that pixel. So if it later draws another pixel at a "farther" depth, the new pixel can be discarded because we show the closer one.
This pretty much allows us to do multiple, independent draw calls to render different objects and still have things correctly displayed. The only thing that has to be drawn in a specific order, it seems, is objects with blending, where the object behind can affect the one in front of it.
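In OpenGL terms (which is where glBomb will stay for now), depth testing boils down to this, assuming the context was created with a depth buffer:

// Once at setup: enable the test; the default GL_LESS keeps the closest fragment.
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
// Every frame: clear depth along with color before the draw calls.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);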
PS. Almost through the Vulkan Guide. I'll probably only do the texture chapter then stop. The current vulkan.cc is 2000 lines long and unstructured, and it's probably a nightmare to try to reorganize it in order to draw proper GLTF meshes. I'll probably stick to OpenGL for now (for glBomb, it's in the name after all), then maybe one day migrate to SDL3 to take advantage of Vulkan.
One thing I also didn't realize until studying Vulkan is that the rasterization phase happens before the fragment shader. Makes sense though.
It's been nice knowing Vulkan. It's a nightmare experience. Definitely not going to try managing descriptor sets and pipelines by myself.
Fri Dec 6 05:04:57 PM CET 2024
Wind and Truth is released. Decision. Decision.
Buy or not? My reading pace has slowed down significantly (and probably not going to finish The Light Fantastic this year). Rhythm of War already felt like a drag. And this one is only 100 pages longer!
Perhaps I'll try the sample and see how it goes. 15 bucks isn't that much...
PS. Didn't leave the apartment for a week and forgot the door code. What's wrong with you, brain?