It’s almost the end of the semester; for the last release, 0.4, of the OSD course at Seneca, we’re tasked to deliver meaningful work in the shape of contributions to projects. Luckily, I know what to do: over the last couple of months, one of my sources of satisfaction has been contributing to ClangIR and, more generally, to the LLVM project, with some sporadic contributions to other projects that spark my interest (both IREE and wgpu). My planning for this happened a little late, mainly because other things got in the way. I really wish I had the time to spend most of my effort on compilers and on understanding what’s behind the abstractions (on the side, I’m also working on a simple RISC-V emulator, which I’m building with the purpose of understanding low-level encoding), but hey, I’m in school, so I don’t have a lot of options here.
By my standards, a week might not be enough to deliver meaningful work. Meaningful work is something you really feel proud of, and it takes time: maybe months, or even years. By the time I’m writing this, I’ve cleared out most of my assignments for the courses I took this semester, so my entire energy is going toward this. I want to emphasize my enthusiasm for compilers and high-performance computing, specifically GPUs. Getting a bit more specific, I’m working on the inner logic of how the host and device communicate, which is something very explicit in CUDA and HIP compilation.
What I want to tackle this week:
ClangIR/LLVM:
I’ve been adding support for CUDA/HIP to ClangIR, and that’s still the way forward in my case. I’m very fortunate to have an interest in this field, since there are actual engineers with decades of experience working on this project. People working in national laboratories (in the US) and at big chip companies are putting effort into this, so this is a fantastic opportunity. It’s also quite an interesting way to learn about heterogeneous computing. As I’ve mentioned in my last couple of posts, I’m not learning by following a tutorial or anything like that; I’m learning by looking at the infrastructure around these computing paradigms in one of the main projects that supports them, which is LLVM.
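To give a feel for why host/device communication is so explicit in CUDA (and what a CUDA-aware frontend like ClangIR has to model), here’s a minimal, generic sketch of mixed host/device code. It’s not taken from any of my patches; the kernel and names are just illustrative.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Device code: runs on the GPU. The __global__ execution-space
// specifier is what tells the compiler to split this function off
// for device code generation.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

// Host code: runs on the CPU and is responsible for allocating
// device memory, copying data across, and launching the kernel.
int main() {
    const int n = 256;
    float host[n];
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float *dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    // The <<<grid, block>>> launch syntax is host-side sugar that the
    // compiler lowers into runtime calls -- exactly the kind of
    // host/device glue a frontend has to understand and emit.
    scale<<<1, n>>>(dev, 2.0f, n);

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);
    printf("%f\n", host[0]);
    return 0;
}
```

The interesting part from a compiler’s perspective is that one translation unit contains code for two different targets, plus the launch plumbing that connects them; that’s the inner logic I keep poking at.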
WebGPU/naga
I’ve also found an interest in the way graphics APIs work. I may have talked about this in a different post, but I’m working on Naga (a sub-project of wgpu), which is the fundamental translation layer between WGSL and target-specific shading languages. There are a couple of optimization opportunities I want to tackle, which I’ll try to deliver before the end of the week.