On starting this blog

Hey! Cool that you actually clicked on the first post of my dev log. I'll try to write a bit about what was going on here (almost) every week. To be honest, the main purpose is to re-digest the stuff I've learned so far and to give myself a chance to restructure things mentally. But maybe some of the topics here will turn out to be useful for someone else as well. If you want to ask me about a specific topic, feel free to contact me through any of the channels on my landing page.

St. Benno School Projects

After reflecting on this week's special course at the St. Benno school, I think we did a good job given the circumstances we encountered. The organization was a bit subpar, but we still managed to squeeze a complete p5.js project into the 2h time slot we were given. I think most of the folks visiting our course learned at least something, although I noticed that the bike game project we presented wasn't perfect for teaching purposes yet. It was also a bit unsatisfying to leave the participants with prior experience idling while giving detailed explanations to the beginners. But I guess that's fine, since the focus was on giving everyone a chance to experience what programming feels like rather than on fostering the knowledge of the more experienced people.

One of the key insights of the project was that ChatGPT is really useful for generating easy-to-understand explanations and real-world examples of things you already have expertise in. I'm not a huge fan of the recent AI hype, but that seemed like a legitimate use case for it.

Private experiments with bevy (Instancing)

In my professional work, rendering thousands of 3D objects in bevy feels rather slow, and we have started to look for performance optimizations in that area, though it isn't urgent yet. In some of my personal projects I also want to render a huge number (millions) of sprites with bevy, and I hit a performance bottleneck there as well. So it felt like a pretty good point in time to look into how bevy's rendering performance could be improved. One of the first things that came to mind was instancing.

Here is a short explanation of why I came up with that specific idea:

I recently participated in a game jam, mainly because I was looking for a reason to work on a project with friends from university, and a game jam naturally gave us that chance while keeping the scope of the project bounded. We used Godot, as one of my friends suggested trying out a real game engine for it. I had no prior experience with Godot, but it was reasonably easy to pick up. One thing that really impressed me about the engine was how well it handled large numbers of entities in the game world: my laptop's fan didn't make the slightest noise while we were rendering millions of sprites. Godot can probably provide that kind of performance because it makes it easy to instance game entities. It's not a performance optimization option from the user's perspective, but rather the defined way of handling multiple game objects that share some properties.

When I looked up how to do instancing in bevy, I was surprised that there isn't really an easy way to do it. The only option a user has is to write a lot of boilerplate code against the lower-level APIs that are closer to wgpu. There is a general issue that discusses instancing, and there is ongoing work on the topic (issue #89). I spent the week's evenings reading through (almost) all of the previous comments in that discussion; it contains some cool proof-of-concept examples, but no real solution yet. Hence, at the end of the week, I decided to give learning wgpu another try.
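To make the idea a bit more concrete: the core of instancing at the wgpu level is a second vertex buffer that advances once per instance instead of once per vertex. The sketch below only shows the CPU side of that, packing per-instance data into raw bytes the way you would before a `Queue::write_buffer` call; the type and function names are my own, and the wgpu calls in the comments follow the public wgpu API as I understand it, not bevy's internals.

```rust
// Per-instance data for a 2D sprite: a position and a uniform scale.
// In a real renderer this struct would usually be #[repr(C)] and handled
// with a crate like bytemuck; here we pack it by hand with only std.
#[derive(Clone, Copy)]
struct Instance {
    position: [f32; 2],
    scale: f32,
}

// Three f32s per instance = 12 bytes; this would be the `array_stride`
// of the instance buffer's VertexBufferLayout.
const INSTANCE_STRIDE: usize = std::mem::size_of::<f32>() * 3;

/// Flatten the instances into little-endian bytes, ready to upload.
fn pack_instances(instances: &[Instance]) -> Vec<u8> {
    let mut bytes = Vec::with_capacity(instances.len() * INSTANCE_STRIDE);
    for inst in instances {
        for f in [inst.position[0], inst.position[1], inst.scale] {
            bytes.extend_from_slice(&f.to_le_bytes());
        }
    }
    bytes
}

fn main() {
    // A 100x10 grid of sprites, all sharing one mesh.
    let instances: Vec<Instance> = (0..1_000)
        .map(|i| Instance {
            position: [(i % 100) as f32, (i / 100) as f32],
            scale: 1.0,
        })
        .collect();

    let bytes = pack_instances(&instances);
    // With wgpu, this buffer would be declared with
    // `step_mode: wgpu::VertexStepMode::Instance`, bound via
    // `render_pass.set_vertex_buffer(1, instance_buffer.slice(..))`, and
    // the whole grid drawn in a single call:
    // `render_pass.draw_indexed(0..num_indices, 0, 0..instances.len() as u32)`.
    println!("{} bytes for {} instances", bytes.len(), instances.len());
}
```

The point of the single draw call at the end is exactly what makes engines like Godot feel fast here: the per-sprite cost moves from the CPU (one draw call each) to a single buffer upload plus one instanced draw.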

Diving deeper into wgpu

I already tried to learn the low-level APIs of wgpu a few months ago. Ultimately I gave up, partly because an example stopped working and partly because we were moving at the time. I used the book "Practical GPU Graphics with wgpu and Rust" for learning, which I think is based on the "Learn wgpu" website. Revisiting the first chapter with the things I've learned in the meantime was very valuable. All the definitions around the render pipeline, bind groups, and buffers make a lot more sense now. As a beginner you need some time to digest all the information that is dumped on you when starting out with wgpu, because the learning curve is rather steep. I started patching the examples on my own and even found a chance to incorporate a new crate (async-winit) that I only found out about this week. Next week I'll try to make progress up to the instancing chapter of the book. Let's see how this goes.
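For anyone who hasn't seen how those pieces fit together: a bind group is what connects a buffer (say, a uniform) to a slot the shader declares, and the render pipeline ties the shader's entry points to the vertex layout. Below is a minimal WGSL shader of the kind the early tutorial chapters build, embedded as a string the way the wgpu examples do; the names (`tint`, `SHADER`) are mine, and the commented-out call mirrors the public wgpu API.

```rust
// A tiny WGSL shader: one uniform bound at group 0, binding 0, plus a
// vertex and a fragment entry point. In Rust it would be handed to
// `device.create_shader_module(wgpu::ShaderModuleDescriptor {
//     label: Some("shader"),
//     source: wgpu::ShaderSource::Wgsl(SHADER.into()),
// })`.
const SHADER: &str = r#"
// The bind group supplies this uniform; on the Rust side a BindGroup
// pointing at a uniform buffer fills group(0)/binding(0).
@group(0) @binding(0) var<uniform> tint: vec4<f32>;

@vertex
fn vs_main(@location(0) pos: vec3<f32>) -> @builtin(position) vec4<f32> {
    // Pass the vertex position straight through to clip space.
    return vec4<f32>(pos, 1.0);
}

@fragment
fn fs_main() -> @location(0) vec4<f32> {
    // Every pixel gets the uniform color from the bind group.
    return tint;
}
"#;

fn main() {
    // Without a GPU at hand we can at least show the shader text exists;
    // the entry point names below are what the pipeline descriptor's
    // `vertex.entry_point` / `fragment.entry_point` fields would reference.
    println!("has vs_main: {}", SHADER.contains("fn vs_main"));
    println!("has fs_main: {}", SHADER.contains("fn fs_main"));
}
```

The `@group(0) @binding(0)` pair in the shader is exactly the coordinate system the bind group layout on the Rust side has to match, which is why the tutorial spends so much time on it.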

I found out about the crate through one of the bots I subscribed to on Mastodon. The person who created the crate wrote a small, entertaining blog post about the rationale for having an async version of winit. As mentioned earlier, I tried the crate out and plugged it into all the wgpu examples in place of the "normal" `winit` crate. It worked perfectly fine and was a lot of fun. While figuring out how to use the crate, I noticed a few typos in the README that caused compile errors in its examples section. That even gave me the chance to make a small contribution to this promising project! I was really happy about that interaction, even though it wasn't that big of a deal.

Configuring neovim for wgpu on NixOS

When setting up a dev environment for the wgpu tutorial, I finally wanted to get some WGSL plugins working with neovim. After fighting a lot with both neovim and NixOS, I got wgsl.vim to work and was able to install and start wgsl-analyzer. I lost the battle of integrating tree-sitter support for WGSL, though, since there is no such grammar in nixpkgs yet, and adding it manually seemed borderline impossible with my current set of Nix skills.

The wgsl.vim plugin provides some nice highlighting for now, and I don't really know what wgsl-analyzer is doing at the moment, but at least I got some things running here.