Wed Dec 18 10:28:12 PM CET 2024

I like the .md2 format already

It's used for models and animations in Quake II. It was intimidating at first, but it turns out to be dead simple.

Basically you have a list of vertices and their normals, one set per frame, because it's quite hard to share vertices when your model moves around. Then you have a list of texture coordinates; these of course can be shared across frames. Finally there's a list of triangles, each holding indexes into the vertex list and the texture-coordinate list, so you can start drawing.
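In C-struct terms the layout maps to something like this. This is a sketch from memory, so the field names are mine and the exact integer sizes should be double-checked against a real header, but it shows where everything lives:

// MD2 on-disk layout, roughly (little-endian throughout).
struct Md2Header {
    int32_t ident;            // magic "IDP2"
    int32_t version;          // 8
    int32_t skinWidth, skinHeight;
    int32_t frameSize;        // size in bytes of one frame
    int32_t numSkins, numVertices, numTexCoords, numTriangles, numGlCommands, numFrames;
    int32_t offSkins, offTexCoords, offTriangles, offFrames, offGlCommands, offEnd;
};

struct Md2TexCoord { int16_t s, t; };      // shared by all frames

struct Md2Triangle {                       // also shared by all frames
    uint16_t vertexIndex[3];               // into the current frame's vertex list
    uint16_t texCoordIndex[3];             // into the texture coordinate list
};

struct Md2Vertex {                         // per vertex, per frame
    uint8_t position[3];                   // compressed; decompress with the frame's scale/translate
    uint8_t normalIndex;                   // into a fixed table of precomputed normals
};

struct Md2Frame {
    float scale[3], translate[3];          // real position = position * scale + translate
    char  name[16];
    // followed by numVertices Md2Vertex entries
};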

And that's pretty much it! There's also a section for more optimized drawing, using triangle strips and fans to save draw commands. But since D3D10+ dropped triangle fans (and therefore SDL3 GPU doesn't support them either), that's a no-go.

We can probably still optimize if we want to, though, by detecting duplicate vertices (attributes included) and drawing with an index buffer instead. Not sure how much that saves, but it sure would be interesting to find out.
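A minimal sketch of the dedup idea, assuming vertices are already expanded into flat position + normal + UV records (the Vertex type and function here are made up for illustration, not anything in glBomb):

#include <array>
#include <cstdint>
#include <map>
#include <vector>

// One expanded vertex: position (3) + normal (3) + UV (2), flattened to 8 floats.
using Vertex = std::array<float, 8>;

// Collapse duplicate vertices and emit an index buffer instead.
// Exact float equality is fine here because duplicates come from the same
// source data, not from recomputed values.
void deduplicate(const std::vector<Vertex>& in,
                 std::vector<Vertex>& outVertices,
                 std::vector<uint32_t>& outIndices)
{
    outVertices.clear();
    outIndices.clear();
    std::map<Vertex, uint32_t> seen;
    for (const Vertex& v : in) {
        auto [it, inserted] = seen.try_emplace(v, static_cast<uint32_t>(outVertices.size()));
        if (inserted)
            outVertices.push_back(v);
        outIndices.push_back(it->second);
    }
}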

Doing all this made me wonder whether I actually applied textures back in glBomb, because the models are just red most of the time. Turns out the texture is there, just faint because the whole thing is zoomed out. And then everything keeps flashing red; not sure why. For flashiness? The next version won't be so flashy, and perhaps it will have shadows!


Author: pclouds | Permalink | Linux

Tue Dec 17 11:18:50 PM CET 2024

Quake 2 "md2" format

Finally got over lighting (the simplest kind, anyway) and started looking at rendering something nicer than a cube, which meant finally taking a look at cone3dmd2.cc in glBomb.

I don't think I ever looked inside this code before? MD2 looks more like a GIF because it stores multiple "frames", with complete vertices for each frame. I suppose it's easier to animate this way (than dealing with skeletal animation, which is a completely different beast). But it also means the naive plan to move to the modern glTF format is not going to work out well because ...

Ah, but glTF does seem to support animation, so perhaps simply converting MD2 to glTF would do the trick? Oii, who cares. Just render it with MD2 first, then convert it to glTF and do it again!
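For the record, playing back a frame-based animation like this should boil down to a lerp between two decompressed frames; a sketch with made-up types, not actual cone3dmd2.cc code:

#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Blend two decompressed MD2 frames; t = 0 gives frame a, t = 1 gives frame b.
std::vector<Vec3> blendFrames(const std::vector<Vec3>& a,
                              const std::vector<Vec3>& b,
                              float t)
{
    std::vector<Vec3> out(a.size());
    for (std::size_t i = 0; i < a.size(); ++i) {
        out[i].x = a[i].x + (b[i].x - a[i].x) * t;
        out[i].y = a[i].y + (b[i].y - a[i].y) * t;
        out[i].z = a[i].z + (b[i].z - a[i].z) * t;
    }
    return out;
}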

PS. Moving again?


Author: pclouds | Permalink | Linux

Sun Dec 15 05:11:44 PM CET 2024

Misunderstanding fragment shader

Let's say we draw a triangle (will I ever draw anything more than a triangle?). We have vertex attributes for three vertices, of course, and the vertex shader produces three outputs. Then some information from the vertex shader (most likely UV texture coordinates) is passed along to the fragment shader.

Here I actually thought the fragment shader was run three times, once per vertex, and all the other pixels were "interpolated" somehow.

Which would rather limit what a fragment shader could do. The reality, it seems, is that the fragment shader is run for every rasterized pixel (i.e. the entire triangle after it's mapped to window space), and each invocation gets its pixel's coordinates in gl_FragCoord. And we can do whatever we want with that.

This just means we essentially have a canvas to draw on. Except that we draw all pixels in parallel, so there can't be any dependencies between pixels in the drawing code.

The remaining question is: how are 3 vertex shader outputs "attached" to a zillion fragment shader invocations (one per pixel)? It seems all vertex outputs are interpolated across the triangle, and how that's done can be controlled via interpolation qualifiers.
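For example, here's a toy fragment shader (not one of mine) where color arrives already interpolated from the three vertex outputs, and gl_FragCoord varies the result per pixel; marking the input flat instead would take the value from the provoking vertex with no interpolation:

#version 460 core
layout (location = 0) in vec3 color;     // interpolated across the triangle by default
// "flat in vec3 color;" would use the provoking vertex's value unchanged
layout (location = 0) out vec4 FragColor;
void main()
{
    // gl_FragCoord.xy is this pixel's position in window space
    float stripe = step(0.5, fract(gl_FragCoord.x / 20.0));
    FragColor = vec4(color * stripe, 1.0);
}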


Author: pclouds | Permalink | Linux

Fri Dec 13 05:13:01 PM CET 2024

I SDL3_GPU now!

Still trying to avoid going back to the lighting lesson in OpenGL. So after Vulkan, let's try Vulkan again, but this time less painfully, through SDL3's GPU abstraction.

It's not bad. But to be fair, nothing can be bad after the Vulkan experience. Some concepts stay, like pipeline creation. Buffer management is SDL's business, but you still have to say where a buffer should live and whether the CPU or the GPU can access it.

Data transfer is also explicit, like in Vulkan: create a transfer buffer the CPU can see, map it, copy data into it, then set up a copy pass that copies it to a GPU buffer. Doing this in Vulkan is more involved because you also have to explicitly transition the destination buffer.
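From memory, the whole upload path looks roughly like this; device, vertexData and dataSize come from earlier setup, and the exact struct field names may be slightly off, so check SDL_gpu.h before copying anything:

// Destination buffer living on the GPU, usable as a vertex buffer.
SDL_GPUBufferCreateInfo bufInfo = {};
bufInfo.usage = SDL_GPU_BUFFERUSAGE_VERTEX;
bufInfo.size = dataSize;
SDL_GPUBuffer* vertexBuffer = SDL_CreateGPUBuffer(device, &bufInfo);

// Staging buffer the CPU can map and fill.
SDL_GPUTransferBufferCreateInfo tbInfo = {};
tbInfo.usage = SDL_GPU_TRANSFERBUFFERUSAGE_UPLOAD;
tbInfo.size = dataSize;
SDL_GPUTransferBuffer* staging = SDL_CreateGPUTransferBuffer(device, &tbInfo);

void* mapped = SDL_MapGPUTransferBuffer(device, staging, false);
SDL_memcpy(mapped, vertexData, dataSize);
SDL_UnmapGPUTransferBuffer(device, staging);

// Copy pass: record the staging -> GPU buffer copy and submit it.
SDL_GPUCommandBuffer* uploadCmd = SDL_AcquireGPUCommandBuffer(device);
SDL_GPUCopyPass* copyPass = SDL_BeginGPUCopyPass(uploadCmd);
SDL_GPUTransferBufferLocation src = { staging, 0 };
SDL_GPUBufferRegion dst = { vertexBuffer, 0, dataSize };
SDL_UploadToGPUBuffer(copyPass, &src, &dst, false);
SDL_EndGPUCopyPass(copyPass);
SDL_SubmitGPUCommandBuffer(uploadCmd);
SDL_ReleaseGPUTransferBuffer(device, staging);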

After that it looks slightly more like OpenGL in the rendering loop, probably because it's greatly simplified. You have to bind the pipeline of course. Then, instead of binding descriptor sets(?), you just bind the vertex buffers (haven't touched textures or uniform buffers yet). Then draw and submit.

The core code looks rather short and sweet

SDL_GPUCommandBuffer* cmdBuf = SDL_AcquireGPUCommandBuffer(device);
SDL_AcquireGPUSwapchainTexture(cmdBuf, window, &swapchainTexture, nullptr, nullptr);
SDL_GPURenderPass* renderPass = SDL_BeginGPURenderPass(cmdBuf, &colorTargetInfo, 1, nullptr);
SDL_BindGPUGraphicsPipeline(renderPass, pipeline);
SDL_BindGPUVertexBuffers(renderPass, 0, &binding, 1);
SDL_DrawGPUPrimitives(renderPass, 3, 1, 0, 0);
SDL_EndGPURenderPass(renderPass);
SDL_SubmitGPUCommandBuffer(cmdBuf);

Since we're not going to reuse command buffers (and other stuff), fencing seems less important. SDL will just have to handle all that. Though if you have multiple command buffers with some dependency, then yeah fences will become important.

Doesn't look like SDL3 has a pipeline cache yet, which is another pain point (and strong point?) of Vulkan.

Overall not bad: 200 lines for a triangle. Though I still need to try out uniform/storage buffers, textures and compute shaders. Wonder why geometry shaders aren't here; perhaps compute shaders can take care of that.


Author: pclouds | Permalink | Linux

Wed Dec 11 08:05:53 PM CET 2024

On 3D rendering pipelines

One of the questions I had when revisiting all this exciting shader and pipeline business is: do we have one big pipeline (and shader set) that renders everything, or is it done as a series of draw calls? And if it's the latter, how does it even work when one draw "steps on" another?

The two are actually related. Turns out it's probably impractical to have one giant vertex shader do everything. It's probably possible, but you'd need a bunch of different branches to do different stuff, and I'm not sure you could really take advantage of the GPU that way.

So, multiple pipelines, multiple draw calls, and on to the second question. It turns out I forgot about depth testing. When the GPU "draws" a pixel, it also keeps track of the depth at that pixel. So if it later draws another pixel that is "farther" away, the new pixel can be discarded because we keep the closer one.

This pretty much allows us to issue multiple, independent draw calls to render different objects and still have things displayed correctly. The only thing that has to be drawn in a specific order, it seems, is objects with blending, where the object behind affects the final color of the one in front of it.
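In OpenGL terms that's just a couple of lines, something like:

// Once during setup (the context also needs a depth buffer,
// e.g. SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24) before creating it).
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);    // keep the fragment closest to the camera

// Every frame, clear depth along with color.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);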

PS. Almost through the Vulkan Guide. I'll probably only do the texture chapter and then stop. The current vulkan.cc is 2000 lines long and unstructured, and it would probably be a nightmare to try to reorganize it in order to draw proper glTF meshes. I'll probably stick to OpenGL for now (for glBomb, it's in the name after all), then maybe one day migrate to SDL3 to take advantage of Vulkan.

One thing I also didn't realize until studying Vulkan is that the rasterization stage happens before the fragment shader. Makes sense though.

It's been nice knowing Vulkan. It's a nightmare experience. Definitely not going to try managing descriptor sets and pipelines by myself.


Author: pclouds | Permalink | Linux

Fri Dec 6 05:04:57 PM CET 2024

Wind and Truth is released. Decisions, decisions.

Buy it or not? My reading pace has slowed down significantly (I'm probably not going to finish The Light Fantastic this year). Rhythm of War already felt like a drag. And this one is only 100 pages longer!

Perhaps I'll try the sample and see how it goes. 15 bucks isn't that much...

PS. Didn't leave the apartment for a week and forgot the door code. What's wrong with you, brain?


Author: pclouds | Permalink | Books

Tue Nov 26 06:57:08 PM CET 2024

I almost Vulkan now

Losing steam on OpenGL, so I thought I'd give Vulkan a try for some fresh air. Since I should have the basics down by now, it shouldn't be as nightmarish as before.

Boy, Vulkan is verbose. This is not an unpopular opinion; in fact it may be one of the most popular descriptions of Vulkan. But boy is it verbose. I gave up around swapchain creation and just used vk-bootstrap instead, so I'll never know whether GPU initialization alone would have taken 500 lines. Just initialization. vk-bootstrap was a good decision.

Then even the main loop, just to do the equivalent of glClear(), is also very verbose. Although most of that seems to be tied to the swapchain, so it's not a very big deal. And explicit synchronization is where Vulkan pays off, supposedly.

That's not even touching shaders. Resource management is also explicit, so it's going to be a wild ride. gl3.cc with all the fancy stuff (and a big vertices array) takes 525 lines. Draw-nothing vulkan.cc is already 497 lines. I miss OpenGL now...


Author: pclouds | Permalink | Linux

Sat Nov 23 11:42:47 AM CET 2024

I OpenGL now

Technically I OpenGL'd twenty years ago. I remember sitting in the library reading about OpenGL (and probably Linux around the same time). Can't quite remember if it was long before the graphics class or not, when I made glBomb.

I've wanted to get back into graphics programming for a long time, to see what all this "shader" business is about. I did some reading from time to time but never got around to doing anything for real.

The glBomb source code was recovered a few years back, and I've been reviving it for fun, modernizing it a bit. That code was 20 years old: OpenGL 1(?), SDL 1, C++98, no STL... It was ugly. But at least it's somewhat working again (well, it was always working, just hard to work with). Now it's time to look at moving away from the old OpenGL code.

So, finally, modern OpenGL. Well, just a rectangle, nothing fancy yet, just to get a feel for it.

#include <fstream>
#include <iostream>
#include <vector>

#include <SDL2/SDL.h>
#include <SDL2/SDL_video.h>
#define GL_GLEXT_PROTOTYPES
#include <SDL2/SDL_opengl.h>

GLuint loadShader(GLenum shaderType, const std::string& path)
{
    GLuint shader = glCreateShader(shaderType);
    {
       std::ifstream f(path, std::ios::binary);
       std::vector<char> data((std::istreambuf_iterator<char>(f)),
			      std::istreambuf_iterator<char>());
       glShaderBinary(1, &shader,
		      GL_SHADER_BINARY_FORMAT_SPIR_V,
		      reinterpret_cast<void*>(data.data()),
		      data.size());
    }
    glSpecializeShader(shader, "main", 0, nullptr, nullptr);

    int success;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &success);
    if (success)
    {
       return shader;
    }

    char infoLog[512];
    glGetShaderInfoLog(shader, sizeof(infoLog), nullptr, infoLog);
    glDeleteShader(shader);
    std::cout << "ERROR::SHADER::VERTEX::COMPILATION_FAILED\n" << infoLog << std::endl;
    return 0;
}

int main()
{
    SDL_Init(SDL_INIT_VIDEO);

    auto window = SDL_CreateWindow("glBomb",
				   SDL_WINDOWPOS_UNDEFINED,
				   SDL_WINDOWPOS_UNDEFINED,
				   800, 600,
				   SDL_WINDOW_OPENGL);

    SDL_GL_CreateContext(window);

    // Shader
    GLuint shaderProgram = glCreateProgram();

    GLuint vertexShader = loadShader(GL_VERTEX_SHADER, "gl3-vert.spv");
    glAttachShader(shaderProgram, vertexShader);

    GLuint fragmentShader = loadShader(GL_FRAGMENT_SHADER, "gl3-frag.spv");
    glAttachShader(shaderProgram, fragmentShader);

    glLinkProgram(shaderProgram);

    int success;
    glGetProgramiv(shaderProgram, GL_LINK_STATUS, &success);
    if (!success)
    {
       char infoLog[512];
       glGetProgramInfoLog(shaderProgram, sizeof(infoLog), nullptr, infoLog);
       std::cout << "ERROR::SHADER::PROGRAM::LINKING_FAILED\n" << infoLog << std::endl;
       return 1;
    }

    glDeleteShader(vertexShader);
    glDeleteShader(fragmentShader);

    // VAO, VBO, EBO...
    GLuint VBO;
    glCreateBuffers(1, &VBO);

    std::vector<float> vertices
    {
       0.5f,  0.5f, 0.0f,  // top right
       0.5f, -0.5f, 0.0f,  // bottom right
       -0.5f, -0.5f, 0.0f,  // bottom left
       -0.5f,  0.5f, 0.0f   // top left 
    };
    glNamedBufferData(VBO, vertices.size() * sizeof(float), vertices.data(), GL_STATIC_DRAW);

    GLuint EBO;
    glCreateBuffers(1, &EBO);

    std::vector<GLuint> indices
    {
        0, 1, 3,  // first Triangle
        1, 2, 3   // second Triangle
    };
    glNamedBufferData(EBO, indices.size() * sizeof(GLuint), indices.data(), GL_STATIC_DRAW);

    GLuint VAO;
    glCreateVertexArrays(1, &VAO);

    int vaoBindingPoint = 0;
    glVertexArrayVertexBuffer(VAO, vaoBindingPoint, VBO, 0, 3 * sizeof(float));
    int attribPos = 0;
    glVertexArrayAttribFormat(VAO, attribPos, 3, GL_FLOAT, GL_FALSE, 0);
    glVertexArrayAttribBinding(VAO, attribPos, vaoBindingPoint);
    glEnableVertexArrayAttrib(VAO, attribPos);

    glVertexArrayElementBuffer(VAO, EBO);

    glUseProgram(shaderProgram);
    glBindVertexArray(VAO);
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);

    while (true)
    {
	auto new_ticks = SDL_GetTicks64();
	SDL_Event ev;
	while (SDL_PollEvent(&ev))
	{
	    if (ev.type == SDL_KEYUP && ev.key.keysym.sym == SDLK_ESCAPE)
	    {
		return 0;
	    }
	}
	
        glClearColor(0.1f, 0.2f, 0.3f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        glDrawElements(GL_TRIANGLES, indices.size(), GL_UNSIGNED_INT, 0);
	SDL_GL_SwapWindow(window);

	auto end_ticks = SDL_GetTicks64();
	if (end_ticks - new_ticks < 100)
	{
	    SDL_Delay(100 - (end_ticks - new_ticks));
	}
    }

    glDeleteVertexArrays(1, &VAO);
    glDeleteBuffers(1, &VBO);
    glDeleteBuffers(1, &EBO);
    glDeleteProgram(shaderProgram);

    return 0;
}

And the two dead-simple shaders. The vertex one:

#version 460 core
layout (location = 0) in vec3 aPos;
void main()
{
  gl_Position = vec4(aPos.x, aPos.y, aPos.z, 1.0);
}

and the fragment one:

#version 460 core
layout (location = 0) out vec4 FragColor;
void main()
{
   FragColor = vec4(1.0f, 0.5f, 1.0f, 1.0f);
}

These have to be compiled to SPIR-V first with

glslangValidator -G vert.glsl -o gl3-vert.spv
glslangValidator -G frag.glsl -o gl3-frag.spv

And that's it. Now, back to re-learning matrix calculations. The full source code, following https://learnopengl.com, is at https://gitlab.com/pclouds/learnopengl


Author: pclouds | Permalink | Linux

Wed Oct 2 10:35:56 PM CEST 2024

From Pathetic to Buyo God

As usual, I didn't listen and just focused on the visual cues, then panicked and clicked too fast anyway. The Buyo game in Ishin isn't bad once you get the rhythm (except Samurai Enbu). Heartbeat is quite relaxing to do. The Geisha level is brutal though, with the last phase using both hands at the same time? Second class isn't too bad.


Author: pclouds | Permalink | Game

Wed Sep 25 06:23:24 AM CEST 2024

Almost perfect half day

Sunrise at 6:39 and sunset at 6:37.


Author: pclouds | Permalink