Fomenko

Making a graphics abstraction layer

By Stéfan Oumansour | May 15, 2025

For a while we've been writing graphics rendering code twice, once for Metal and once for OpenGL. I was tired of maintaining two different backends that want to do the same thing, so in December I started writing a graphics abstraction layer as a side project. Last month we decided to integrate it into the game.

The idea was to provide an API for writing graphics code that was similar to Metal while not being too far from how other modern APIs work (though we don't know much about other APIs). The abstraction layer also had to be implemented for OpenGL 4.6, which was probably the most difficult part.

General API

In our new API, most graphics objects are created using a struct called a descriptor, which groups all the parameters for creating that object, and every object requires the caller to provide a name. In Metal and OpenGL you can create objects without giving them a name, but I often end up regretting not having done so when trying to debug things, so I decided to force the caller to name everything.
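
To give an idea, here is a minimal sketch of what creating a texture could look like. The GFX_Texture_Desc fields and the init_gfx_texture name are illustrative guesses, not the actual API; only the descriptor-plus-mandatory-name shape is the real thing:

// Hypothetical texture creation; field and function names are illustrative.
desc : GFX_Texture_Desc;
desc.width  = 1024;
desc.height = 1024;
desc.format = .RGBA_UNORM8;  // Same format enum as in the pipeline example further down.

color_texture : GFX_Texture;
ok := init_gfx_texture(*color_texture, "Camera Color", desc);  // The name is mandatory.
assert(ok);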

I will not go too much into the details of the API, as looking at the Metal API already gives a decent idea. Here's a quick overview of the keywords in our API and the concepts they map to in Metal and OpenGL:

Our API                 | Metal                       | OpenGL 4.6
GFX_Texture_Desc        | MTLTextureDescriptor        | glTextureParameter calls
GFX_Texture             | MTLTexture                  | GLuint/glCreateTextures
GFX_Sampler_State_Desc  | MTLSamplerDescriptor        | glSamplerParameter calls
GFX_Sampler_State       | MTLSamplerState             | GLuint/glCreateSamplers
GFX_Buffer              | MTLBuffer                   | GLuint/glCreateBuffers
GFX_Pipeline_State_Desc | MTLRenderPipelineDescriptor | glEnable/glDisable/... calls
GFX_Pipeline_State      | MTLRenderPipelineState      | glGenProgramPipelines/glCreateVertexArrays
GFX_Pipeline_Binding    | N.A. (SPIR-V Cross stuff)   | N.A. (SPIR-V Cross stuff)
GFX_Render_Pass_Desc    | MTLRenderPassDescriptor     | glNamedFramebufferTexture/glNamedFramebufferDrawBuffers calls
GFX_Render_Pass         | MTLRenderCommandEncoder     | GLuint/glCreateFramebuffers
GFX_Shader              | MTLLibrary/MTLFunction      | GLuint/glCreateShaderProgramv

Shaders

One complicated part of creating a graphics abstraction layer is shaders. Not only do you have to decide what language to use, you also have to figure out how to pass parameters to shaders.

For the language we decided to use Slang: as a superset of HLSL, it lets us write code that is close enough to HLSL that switching to it later, if we ever had a reason to, would be easier. I personally don't really like Slang. In particular, type checking is lacking in certain situations, which causes wrong code generation; you can easily shoot yourself in the foot (i.e. you won't get an error message for incorrect code), and that's really not good. The documentation is also not great: I find it scarce and hard to read because it is often decorated with a lot of template parameters (and who chose the name "Slang"? Does it seem like "slang" is easy to find on Google? When will we stop using common English words to name software?!). Anyway, I don't like it.

We compile Slang code to SPIR-V and then use SPIR-V Cross, a pretty nice tool that is easy to work with. It lets us compile SPIR-V to either Metal Shading Language or GLSL for OpenGL while performing the necessary code modifications, and gives us information about shader parameters and their binding points. Using SPIR-V Cross also lets us keep some distance from Slang, so in the future we could switch to any language that compiles to SPIR-V if we ever want to.
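
For reference, here is a rough sketch of the GLSL path, assuming Jai bindings that mirror the SPIR-V Cross C API one-to-one (the spvc_* entry points are the real C API, but the wrapper name and the lack of error handling are mine):

spirv_to_glsl :: (spirv_words : []u32) -> string {
    spvc_ctx : spvc_context;
    spvc_context_create(*spvc_ctx);
    defer spvc_context_destroy(spvc_ctx);

    ir : spvc_parsed_ir;
    spvc_context_parse_spirv(spvc_ctx, spirv_words.data, xx spirv_words.count, *ir);

    compiler : spvc_compiler;
    spvc_context_create_compiler(spvc_ctx, SPVC_BACKEND_GLSL, ir, SPVC_CAPTURE_MODE_TAKE_OWNERSHIP, *compiler);

    // Target the GLSL version our OpenGL 4.6 backend expects.
    options : spvc_compiler_options;
    spvc_compiler_create_compiler_options(compiler, *options);
    spvc_compiler_options_set_uint(options, SPVC_COMPILER_OPTION_GLSL_VERSION, 460);
    spvc_compiler_install_compiler_options(compiler, options);

    glsl : *u8;
    spvc_compiler_compile(compiler, *glsl);
    return copy_string(to_string(glsl));  // Copy before the SPIR-V Cross context is destroyed.
}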

As hinted in the previous paragraph, we use reflection information to retrieve shader parameters. This means we don't have to think too much about binding points and how they translate to each backend when writing shaders, especially for OpenGL, where separating samplers and textures is not possible (more on that later). We have a helper function that populates a struct of all the binding information based on each field's name. I am certain that as more backends are supported we will have to revisit this part, as other APIs can be more rigid here (see D3D12 root signatures).
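
As a sketch of the idea (not the real helper), you can walk the struct's type info and look up each GFX_Pipeline_Binding field by name in the shader's reflected parameters; find_shader_binding below is an assumed lookup, and nested binding structs are ignored for brevity:

populate_bindings :: (bindings : *$T, shader : *GFX_Shader) {
    info := cast(*Type_Info_Struct, type_info(T));
    for member : info.members {
        // Assumed helper: returns whether the shader declares a parameter with this
        // name, plus the reflected binding information for it.
        found, shader_binding := find_shader_binding(shader, member.name);
        if !found continue;

        field := cast(*GFX_Pipeline_Binding, cast(*u8, bindings) + member.offset_in_bytes);
        field.* = shader_binding;
    }
}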

OpenGL shenanigans

When writing the OpenGL implementation of the API I came across many obstacles that were not problems on the Metal side of things.

Loading Slang shaders

Compiling Slang for OpenGL is absolutely not supported, contrary to what was advertised back when I started using it. This was the motivation for using SPIR-V Cross, which lets us transpile the SPIR-V generated by the Slang compiler into GLSL compatible with OpenGL.

Separate textures and sampler states

OpenGL does not allow separate textures and sampler states: a texture "unit" always combines both the texture and its sampling parameters. This is not the case in modern APIs, and I believe in Metal you always have to separate textures and samplers. There is a very handy function in SPIR-V Cross to solve this: spvc_compiler_build_combined_image_samplers. For each texture and sampler binding we store an array of all the texture units associated with that binding, and when calling set_texture or set_sampler_state we call glBindTextureUnit or glBindSampler for every unit in that array.
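
A sketch of what set_texture can look like on the OpenGL backend under this scheme (the .handle field and the exact signature are my simplifications, not the actual code):

set_texture :: (pass : *GFX_Render_Pass, binding : GFX_Pipeline_Binding, texture : *GFX_Texture) {
    if binding.associated_texture_units.count > 0 {
        // This texture was folded into one or more combined image samplers by
        // spvc_compiler_build_combined_image_samplers: bind it to every unit that uses it.
        for unit : binding.associated_texture_units {
            glBindTextureUnit(unit, texture.handle);
        }
    } else if binding.index >= 0 {
        glBindTextureUnit(xx binding.index, texture.handle);
    }
}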

Separable programs and conflicting binding points

At some point later I wanted to separate the vertex and fragment shaders, so I had to be able to compile shaders that contained only one of the two stages, or both. This led to conflicting binding points between the vertex and fragment shaders. To fix this, when creating a pipeline state I build a relocation table using the reflection information from the shaders (glGetProgramInterfaceiv, glGetProgramResourceiv, glGetProgramResourceName, glGetUniformiv) and modify the binding information stored in the pipeline state. When calling set_pipeline_state I apply the new binding points for all parameters in the fragment shader using glUniformBlockBinding, glShaderStorageBlockBinding and glProgramUniform1i.

When creating the GFX_Pipeline_State, this is what we do:

OpenGL_Binding_Relocation :: struct {
    name : string;
    block_index_or_uniform_location : s32 = -1;
    original_binding_index : s32 = -1;
    relocated_binding_index : s32 = -1;
    associated_texture_units : []OpenGL_Binding_Relocation;
}

setup_pipeline_bindings :: (state : *GFX_Pipeline_State) {
    clone_bindings :: (bindings : []GFX_Pipeline_Binding) -> []GFX_Pipeline_Binding #must {
        result := NewArray(bindings.count, GFX_Pipeline_Binding);
        for * result {
            original_binding := bindings[it_index];
            it.* = original_binding;
            it.name = copy_string(it.name);

            if it.associated_texture_units.count > 0 {
                it.associated_texture_units = NewArray(it.associated_texture_units.count, u32);
                memcpy(it.associated_texture_units.data, original_binding.associated_texture_units.data, original_binding.associated_texture_units.count * size_of(u32));
            }
        }

        return result;
    }

    state.vertex_stage_bindings = clone_bindings(state.vertex_shader.bindings);

    if !state.fragment_shader {
        return;
    }

    state.fragment_stage_bindings = clone_bindings(state.fragment_shader.bindings);
    state.fragment_stage_binding_relocations = NewArray(state.fragment_stage_bindings.count, OpenGL_Binding_Relocation, initialized=true);

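    // Find the first buffer binding and texture unit indices not used by the vertex
    // stage; fragment-stage bindings will be relocated past these.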
    first_available_fragment_buffer_binding : s32 = 0;
    first_available_fragment_texture_binding : s32 = 0;
    for state.vertex_stage_bindings {
        if it.type == {
            case .UNIFORM_BUFFER; #through;
            case .STORAGE_BUFFER;
                first_available_fragment_buffer_binding = max(first_available_fragment_buffer_binding, it.index + 1);

            case .TEXTURE; #through;
            case .SAMPLER_STATE;
                if it.associated_texture_units.count > 0 {
                    for unit : it.associated_texture_units {
                        first_available_fragment_texture_binding = max(first_available_fragment_texture_binding, cast(s32, unit + 1));
                    }
                } else {
                    first_available_fragment_texture_binding = max(first_available_fragment_texture_binding, it.index + 1);
                }
        }
    }

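    // Query the fragment program's active resource counts; we use them below to find
    // block indices and sampler uniform locations through reflection.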
    fragment_program := state.fragment_shader.handle;
    num_uniforms, num_uniform_blocks, num_shader_storage_blocks : s32;
    glGetProgramInterfaceiv(fragment_program, GL_UNIFORM, GL_ACTIVE_RESOURCES, *num_uniforms);
    glGetProgramInterfaceiv(fragment_program, GL_UNIFORM_BLOCK, GL_ACTIVE_RESOURCES, *num_uniform_blocks);
    glGetProgramInterfaceiv(fragment_program, GL_SHADER_STORAGE_BLOCK, GL_ACTIVE_RESOURCES, *num_shader_storage_blocks);

    SAMPLER_TYPES :: GLenum.[GL_SAMPLER_2D, /*...*/];

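    // Shift every fragment-stage binding past the vertex stage's bindings and record the
    // relocation, so set_pipeline_state can apply the new bindings to the GL program.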
    for * state.fragment_stage_bindings {
        relocation := *state.fragment_stage_binding_relocations[it_index];

        if it.type == {
        case .UNIFORM_BUFFER; #through;
        case .STORAGE_BUFFER;
            if it.index < 0 {
                continue;
            }

            relocation.original_binding_index = it.index;
            relocation.relocated_binding_index = it.index + first_available_fragment_buffer_binding;
            it.index += first_available_fragment_buffer_binding;

            num := ifx it.type == .STORAGE_BUFFER then num_shader_storage_blocks else num_uniform_blocks;
            for i : 0..num - 1 {
                props := GLenum.[GL_BUFFER_BINDING];
                values : [1]s32;
                type : GLenum = xx ifx it.type == .STORAGE_BUFFER then GL_SHADER_STORAGE_BLOCK else GL_UNIFORM_BLOCK;
                glGetProgramResourceiv(fragment_program, type, xx i, xx props.count, props.data, size_of(type_of(values)), null, values.data);

                binding_index := values[0];
                if binding_index == relocation.original_binding_index {
                    name := cast(*u8, alloc(100));
                    len : u32;
                    glGetProgramResourceName(fragment_program, type, xx i, 100, *len, xx name);

                    relocation.name = .{count=xx len, data=name};
                    relocation.block_index_or_uniform_location = xx i;
                    break;
                }
            }

        case .TEXTURE; #through;
        case .SAMPLER_STATE;
            relocation.associated_texture_units = NewArray(it.associated_texture_units.count, OpenGL_Binding_Relocation, initialized=true);

            if it.index >= 0 {
                it.index += first_available_fragment_texture_binding;
            }

            for * unit : it.associated_texture_units {
                relocation_unit := *relocation.associated_texture_units[it_index];

                for i : 0..num_uniforms - 1 {
                    props := GLenum.[GL_LOCATION, GL_TYPE];
                    values : [2]s32;
                    type : GLenum = GL_UNIFORM;
                    glGetProgramResourceiv(fragment_program, type, xx i, xx props.count, props.data, size_of(type_of(values)), null, values.data);

                    location := values[0];
                    uniform_type := values[1];

                    if location < 0 || !array_find(SAMPLER_TYPES, cast(u32, uniform_type)) {
                        continue;
                    }

                    binding_index : s32;
                    glGetUniformiv(fragment_program, location, *binding_index);

                    // Match the sampler uniform whose current binding is this texture unit.
                    if binding_index == cast(s32, unit.*) {
                        name := cast(*u8, alloc(100));
                        len : u32;
                        glGetProgramResourceName(fragment_program, type, xx i, 100, *len, xx name);

                        relocation_unit.name = .{count=xx len, data=name};
                        relocation_unit.block_index_or_uniform_location = xx location;
                        relocation_unit.original_binding_index = xx unit.*;
                        relocation_unit.relocated_binding_index = xx unit.*;
                        relocation_unit.relocated_binding_index += first_available_fragment_texture_binding;
                        break;
                    }
                }

                unit.* += xx first_available_fragment_texture_binding;
            }
        }
    }
}

And in set_pipeline_state:

for state.fragment_stage_binding_relocations {
    binding := state.fragment_stage_bindings[it_index];
    if binding.type == {
        case .UNIFORM_BUFFER;
            if it.block_index_or_uniform_location >= 0 {
                glUniformBlockBinding(fragment_program, xx it.block_index_or_uniform_location, xx it.relocated_binding_index);
            }

        case .STORAGE_BUFFER;
            if it.block_index_or_uniform_location >= 0 {
                glShaderStorageBlockBinding(fragment_program, xx it.block_index_or_uniform_location, xx it.relocated_binding_index);
            }

        case .TEXTURE; #through;
        case .SAMPLER_STATE;
            for unit : it.associated_texture_units {
                if unit.block_index_or_uniform_location >= 0 {
                    glProgramUniform1i(fragment_program, xx unit.block_index_or_uniform_location, xx unit.relocated_binding_index);
                }
            }
    }
}

Render passes and framebuffers

When rendering, OpenGL requires the user to create a framebuffer object beforehand; it is a long-lived object and is very expensive to create. In our API, render pass attachments are defined dynamically right before submitting render commands, so we can't simply create a framebuffer when beginning the render pass and destroy it when the pass ends. To handle this, the first time we execute a render pass that uses textures A, B and C as attachments, we create a framebuffer with these textures and store it in a hash map. The next time these textures are used as attachments for a render pass, we reuse that framebuffer by looking it up in the hash map. When A, B or C is destroyed, we make sure that all framebuffers using that texture are also destroyed and removed from the cache.
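
Here is a simplified sketch of that cache, assuming a hash table keyed by a hash of the attachment handles; hash_attachments, the descriptor fields and the .handle member are illustrative, only the caching strategy is the real one:

framebuffer_cache : Table(u64, GLuint);  // From the Hash_Table module.

get_or_create_framebuffer :: (desc : GFX_Render_Pass_Desc) -> GLuint {
    key := hash_attachments(desc);  // Assumed helper: hashes the handles of all attachments.

    cached := table_find_pointer(*framebuffer_cache, key);
    if cached return cached.*;

    fbo : GLuint;
    glCreateFramebuffers(1, *fbo);

    draw_buffers : [8]GLenum;  // Unused slots stay 0, which is GL_NONE.
    for desc.color_attachments {
        if !it.texture continue;
        attachment := GL_COLOR_ATTACHMENT0 + cast(u32, it_index);
        glNamedFramebufferTexture(fbo, attachment, it.texture.handle, 0);
        draw_buffers[it_index] = attachment;
    }
    glNamedFramebufferDrawBuffers(fbo, xx draw_buffers.count, draw_buffers.data);

    if desc.depth_attachment.texture {
        glNamedFramebufferTexture(fbo, GL_DEPTH_ATTACHMENT, desc.depth_attachment.texture.handle, 0);
    }

    table_add(*framebuffer_cache, key, fbo);
    return fbo;
}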

What it looks like

Here's basically what rendering code looks like with the new API. Some things are made explicit and some are omitted, so this is not a 1:1 representation of what we have:

Material_Bindings :: struct {
    albedo: GFX_Pipeline_Binding;
    normal_map: GFX_Pipeline_Binding;
    metallic_roughness_ao: GFX_Pipeline_Binding;
}

set_material_parameters :: (pass: *GFX_Render_Pass, bindings: Material_Bindings, material: Material) {
    set_texture(pass, bindings.albedo, material.albedo);
    set_texture(pass, bindings.normal_map, material.normal_map);
    set_texture(pass, bindings.metallic_roughness_ao, material.metallic_roughness_ao);
}

The_Bindings :: struct {
    frame_info: GFX_Pipeline_Binding;
    camera: GFX_Pipeline_Binding;
    meshes: GFX_Pipeline_Binding;
    material: Material_Bindings;
}

the_pipeline: GFX_Pipeline(The_Bindings);

init_the_pipeline :: () -> bool {
    desc: GFX_Pipeline_Desc;
    desc.vertex_layout = make_vertex_layout(Vertex, DEFAULT_VERTEX_BUFFER_INDEX);
    desc.color_formats[0] = .RGBA_UNORM8;
    desc.depth_format = .DEPTH_FLOAT32;
    desc.depth_state = .{enabled=true, write_enabled=true};

    return init_gfx_pipeline(*the_pipeline, "The Pipeline", desc, destroy_on_shader_reload=true);
}

render_pass :: (ctx: *Frame_Render_Context, mesh_buffer: *Mesh_Buffer, color: *GFX_Texture, depth: *GFX_Texture) {
    if is_null(*the_pipeline) {
        ok := init_the_pipeline();
        assert(ok);
    }

    pass_desc: GFX_Render_Pass_Desc;
    set_color_attachment(*pass_desc, 0, color);
    set_depth_attachment(*pass_desc, depth);
    clear_color(*pass_desc, 0, .{0,0,0,0});
    clear_depth(*pass_desc, 1);

    pass: GFX_Render_Pass;
    begin_gfx_render_pass(*pass, "Render Meshes", ctx.cmd_buffer, pass_desc);
    {
        set_pipeline_state(*pass, *the_pipeline);

        set_viewport(*pass, .{width=ctx.framebuffer_width, height=ctx.framebuffer_height});

        set_buffer(*pass, the_pipeline.vertex_stage.frame_info, get_frame_data_buffer(), ctx.buffer_offsets.frame_info);
        set_buffer(*pass, the_pipeline.fragment_stage.frame_info, get_frame_data_buffer(), ctx.buffer_offsets.frame_info);

        set_buffer(*pass, the_pipeline.vertex_stage.camera, get_frame_data_buffer(), ctx.buffer_offsets.camera);
        set_buffer(*pass, the_pipeline.fragment_stage.camera, get_frame_data_buffer(), ctx.buffer_offsets.camera);

        for mesh_buffer.draw_calls {
            set_vertex_buffer(*pass, DEFAULT_VERTEX_BUFFER_INDEX, it.vertex_buffer);

            set_buffer(*pass, the_pipeline.vertex_stage.meshes, get_frame_data_buffer(), it.buffer_offset);
            set_buffer(*pass, the_pipeline.fragment_stage.meshes, get_frame_data_buffer(), it.buffer_offset);

            set_material_parameters(*pass, the_pipeline.fragment_stage.material, it.material);

            draw_indexed_primitives(*pass, it.index_buffer, it.index_count, .UINT32, it.instance_count);
        }
    }
    end_gfx_render_pass(*pass);
}

draw_frame :: () {
    begin_gfx_frame();

    defer reset_frame_data_allocator();

    ctx: Frame_Render_Context;
    // Allocate data for frame_info, camera, meshes etc...

    cmd_buffer := create_gfx_command_buffer("Frame");
    ctx.cmd_buffer = *cmd_buffer;

    render_pass(*ctx, *main_camera_mesh_buffer, *camera_color_texture, *camera_depth_texture);

    execute_gfx_command_buffer(*cmd_buffer);

    submit_gfx_frame();
}

Final notes

Overall it took me two months to build this and maybe two more to polish it as a side project, and we integrated it all into Oleg and rewrote the renderer in around 20 days. I was careful to integrate the abstraction layer iteratively while keeping the old rendering code around, and that proved to be very effective. I would say this mission is definitely a success: the integration of the abstraction layer happened pretty smoothly and quickly, and we didn't have many big regressions afterwards. It greatly simplified the renderer and lets us put more effort into actually writing rendering code.

I will probably put the original graphics abstraction code that I wrote as a side project on GitHub at some point. If it's still not there by the time you read this and you'd like to see it, don't hesitate to reach out to me via e-mail!

–––
Don't hesitate to check out my GitHub or reach out via oumansour.stefan@gmail.com