This is my interview with Jean-Baptiste Kempf (JBK), CTO of Scaleway, a fast-growing and ambitious public cloud focused on serving the European market. It was fun to ask JBK about the challenges of scaling a massive public cloud in the age of the AI buildout. We also talked about where and why Scaleway is using Rust. And, I learned that JBK is an incredibly prolific engineer and entrepreneur with a deep personal investment in the Rust community. To see jobs available at this and other cool Rust companies, check out our extensive Rust job board.
Want to advertise here? Reach out! filtra@filtra.io
Drew: Scaleway is a European cloud provider and a subsidiary of the Iliad Group. Can you tell me about the Iliad Group? I think some people in our audience might not be familiar with it.
JBK: I would be surprised if many people know about Iliad. Iliad is a telecom group. It started as a phone company and an xDSL company, which now focuses on fiber. It’s a classic Internet Service Provider (ISP) that operates a bit differently from the norm—closer to something like T-Mobile in the US—doing things its own way and generally being cheaper. It started in the late ’90s and grew to become one of the larger ISPs in France, first with landlines, then fiber, and then mobile. It has since expanded into Italy, Poland, and the Netherlands, and is now around a $12 billion business.
Drew: You just mentioned that Iliad is huge in Europe. Scaleway also is really focused on Europe and specifically providing a European alternative to some of the global hyperscalers. That is a really important selling point for your platform, correct?
JBK: That’s right. If you look at the global scene, there are three US-based hyperscalers, maybe four if you count Oracle, and maybe two from China. The rest don’t really exist at that scale.
JBK: The challenge is that reaching scale requires scale to begin with, because that allows you to buy a lot of hardware, and there’s a lot of expensive software to develop before the hardware costs start coming down. There’s a big initial cost. But, once you reach at least a billion dollars in revenue per year, you can start scaling and competing. So far, there is no true hyperscaler competitor in Europe. The idea with Scaleway is to try to get there. It’s not there yet, but it’s experiencing significant growth, around 30% year-over-year.
JBK: Over the last two or three years, there’s been a lot of political distress globally—between China and the US, and frankly, between the US and everyone else. Because of this, ensuring that you have a fallback solution if things go wrong with US or Chinese hyperscalers is critical. That’s why there’s so much focus on European tech sovereignty, and that’s one of Scaleway’s main selling points.
JBK: Compared to other European alternatives, Scaleway is a true cloud provider. While it used to own its data centers (which have since been spun off), it now rents rooms inside data centers and functions as a software company. It’s intensely focused on software. Everything is cloud-native—you consume everything through APIs and Terraform. It used to be a hosting provider, but it has moved to an actual cloud provider. We’ve developed a lot, and now we offer everything you expect from a cloud provider. Everything is consumed the same way, with the same type of APIs, billing, and so on. It really looks like AWS or GCP, just smaller.
Drew: That’s very cool. I didn’t know there was specifically a billion-dollar threshold that you needed to pass to achieve sustainability.
JBK: The problem is that when you’re a cloud provider, you have high capital expenditure (CAPEX) costs. You are buying a ton of machines. When you start buying machines by the thousands, the cost goes down significantly.
JBK: Secondly, there are so many products you need to offer—around 880 products to be a true cloud provider in my estimation. Once that’s done, you can scale to many people. However, there is a minimum amount of time you need to invest. If you’re a small company, that engineering time costs a lot, while it’s less prohibitive if you’re larger.
JBK: Our approximation is that the investment required just to develop the software for a basic cloud provider is on the order of $1 billion. This is just developing the software: doing the Terraform, the Ansible, scaling everything, doing multi-data-center failovers, integrating everything into the API, and so on. This is before maintenance.
JBK: So, you do that over, perhaps, five years, but that’s still a $200 million investment per year. If you’re a $300 million company, that makes no sense, but it does if you’re a billion-dollar company. This is why you end up with hyperscalers, and why there are so few of them. Also, when you’re a large cloud provider, you can even develop your own hardware. If you develop your own hardware, then there are additional advantages that you cannot achieve when you are smaller.
Drew: So it’s a scale economy business.
JBK: Yes, but it’s a very technical business as well. You need to be highly skilled, which also makes it more difficult.
Drew: Right. With so many products offered by the big public clouds, how are you deciding what to build, or do you feel like you have everything built already?
JBK: No, no, we cannot compete at all with the hyperscalers. However, you can see that there are base products—the common blocks. These are the layers close to the bare metal, such as Infrastructure as a Service (IaaS), going all the way up to serverless functions or Inference as a Service. These are useful for any type of vertical.
JBK: The hyperscalers go even further, offering vertical-specific and business-specific solutions for every industry, which is why they have so many products. But if you look closely, there’s a common core—you can count between 60 and 120 base services that everyone uses.
JBK: Our vision is to not compete with the vertical-specific, business-specific solutions; instead, we’re going to use a marketplace with third parties who are going to do that.
Drew: That makes sense. Do you make products that are specifically tailored for the European market? For example, are there special things you do to comply with GDPR or other regulations that serve Europe better?
JBK: Yes. For example, the way we develop Scaleway is in a very privacy-focused way. Everything is encrypted: block storage, object storage, file storage, VPCs, and networking. Everything is encrypted in a way that makes it difficult for us, the cloud provider, to access the data. We have made accessing data a nightmare for ourselves, and this has been ingrained from day one.
Drew: So, I guess if you’re a cloud provider today, especially one that’s trying to grow a lot, that means you are an AI infrastructure company kind of by default, right?
JBK: Yes, exactly.
JBK: The history of Scaleway is a bit complex. It started in the nineties as a website hosting service, then became a dedicated server service for a long time. About eight years ago, it began offering cloud services, and in the last three years, that growth accelerated significantly.
JBK: In these last three years, we started deploying a large number of GPUs. So far, we’ve deployed probably 6,000 to 7,000 large GPUs, including training clusters equivalent to 1,000 to 2,000 units, which are the type of machines used by very large AI companies for both training and inference.
JBK: But this is only part of the business. The rest is the standard cloud. When you look at AI training, you need access to a lot of data to train the machine learning models. Therefore, you need good object storage, a good data storage solution, and a data warehouse. You are going to be working with data, which requires so much of the regular cloud stack anyway.
Drew: You know, I’ve never had the opportunity to ask someone who runs a cloud about this: Is it hard to get GPUs?
JBK: Yes and no—it depends on which ones you want and when you want them. If you are clever, you can allocate and ask for them in advance. Though, it is a very volatile market, to be honest.
Drew: So what specific cloud services are you building for AI?
JBK: In the end, what most people are consuming is called Inference as a Service. We run the models on our GPUs in the best way possible, and you pay based on the tokens outputted by the model and the complexity of the model itself. This is the most used AI product.
JBK: Then, of course, we have dedicated GPUs, virtual machines, bare-metal GPUs—a lot of those. Linked to that, we also have everything related to data pipelines, MLOps, DataOps, Jupyter Notebooks, and so on.
Drew: Okay, so it sounds like you have a pretty complete set of offerings.
JBK: Well, I’m not sure it’s complete, but at least it’s not ridiculous. We’re at a point where we offer a good alternative. It’s not as advanced as, say, Google Vertex AI or similar platforms, but we’re past the very beginning stages.
Drew: Right, it’s competitive. When I was researching, the commitment to only using renewable energy for your data centers really made me raise my eyebrows. Especially with the massive energy demands of AI, that sounds hard. How are you pulling that off?
JBK: Well, the thing is that most of our data centers are in France, and France’s energy mix is typically around 65-80% nuclear energy. So, if you look at the energy mix, we’re around 30 to 40 grams of CO2 per kilowatt-hour yearly.
JBK: When you compare that, Germany is about ten times that, around 300, and countries like Poland are at 700. So, I’m not sure it’s strictly "renewable," but it is a low-carbon electricity source. We are also deploying data centers in Sweden for the same reason—it has high hydro-power availability. The US energy mix is often catastrophic, probably around 300 grams of CO2 as well.
JBK: We care deeply about this. The second thing we do, which is interesting, is that two of our main data centers are adiabatic data centers. That means they don’t use traditional air conditioning to cool things down, which reduces the energy overhead. So we consume less electricity, and the electricity we do consume is mostly low-carbon nuclear power.
Drew: Yeah, that’s really interesting. I wondered if that might be the case, because I knew that France was very heavily nuclear. So, being in France is a kind of advantage for you in terms of achieving that goal.
JBK: Yes, I think so. If you look at the worldwide electricity mixes, you’ll see that Quebec is quite good, and Sweden is quite good because of hydro-power. Parts of Canada and France are pretty good. Spain and Portugal are okay. I think some parts in the South of the US are also okay because of a large amount of solar power.
Drew: One of the things I came across when I was researching your AI work is your partnership with Kyutai. Can you tell me what Scaleway is doing to support them?
JBK: Yes. Kyutai is an AI company, but it’s a bit different from OpenAI, Mistral, or Anthropic. It’s mostly a research lab doing new AI models—a bit like what OpenAI was supposed to be initially, a nonprofit. For example, they created something quite interesting called Moshi, a model that uses pure audio. It was trained directly on audio data. Instead of using the typical audio assistant workflow—audio to text, then text to text with an LLM, and finally text to speech (using tools like Eleven Labs)—Moshi goes directly from audio to audio. It’s quite rare to see that, and Kyutai trains it on our large clusters.
Drew: Cool. I didn’t realize that Moshi was specifically audio-to-audio only.
JBK: That’s one of Moshi’s main use cases. Of course, it’s not the best LLM due to limited data, but it is a speech-to-speech model, which means it’s extremely, extremely fast.
Drew: Because you don’t have those translation steps that the other models use.
JBK: Exactly, because those steps are a bit too slow. Kyutai also made another model called Hibiki, which does the same thing, but for translation. So that’s audio-to-audio translation.
Drew: This discussion of Kyutai is reminding me of something you and I were talking about earlier. It feels like Paris has become a really important tech hub.
JBK: Yeah, I’d say that outside of the US and China, you have Israel and then you have Paris. Those are the major startup hubs lately. Paris has a lot of AI startups, which is pretty cool. This is a bit of a change, because in the past the core of AI startups in Europe would have been Berlin or maybe London.
Drew: Is there any reason why you think that’s been the case?
JBK: Yeah, 10 to 15 years ago, people, including Xavier Niel, the founder of the Iliad Group, decided they should do something. For example, we have Station F, which is one of the largest, if not the largest, startup incubator in the world, with places for 6,000 people. There was a huge push from the government to showcase what France can do. The French have a lot of very good engineers, so the push was to build something. Of course, there’s a lot of marketing around "French Tech," but it brought a lot more people to the ecosystem. Now, a lot of students actually want to work in startups rather than large groups. So there’s been a mindset change since about 2010.
Drew: Even within the Rust community, I’ve seen a real density of companies in Paris.
JBK: Yeah, I think the French education system focuses a lot on math. So, there are many engineers who are good at math, and you see a lot of people around compilers and languages. You’ve seen that before; there are things like Coq, a formal proof assistant, and a lot of work on functional languages, like OCaml. There’s been a large community around languages and compilers for a long time. I think this is why you also see that reflected in the Rust community.
Drew: Speaking of Rust, how does Rust fit into your tech stack? What do you use it for at Scaleway?
JBK: I personally use Rust in many ways, including at my other jobs, but at Scaleway it’s used for almost all the networking products. This includes VLANs, VPCs (Virtual Private Clouds), and managed VPNs—basically everything around low-level Software-Defined Networking (SDN).
JBK: Scaleway started mostly as a Python shop, then moved to Go for most projects. However, for projects that are lower-level and need to work closer to real-time, such as low-level storage or low-level networking, we use Rust.
Drew: That seems to be a pattern I’ve noticed with cloud companies: using Go as the default and then Rust when extra performance is needed.
JBK: The problem with Go is that it has a garbage collector, which is a big issue in certain contexts. It’s not a problem when you’re building a SaaS API, because if it slows down one client temporarily it’s less critical. You can spawn another session or manage pools. The garbage collector isn’t too much of a problem there.
JBK: However, as soon as you’re dealing with real-time performance, the garbage collector is horrible. If you’re the one user whose session hits the garbage collection pause, the performance is ruined. This is a huge problem. In my opinion, for anything that is real-time, networking, parsing, or multimedia, Rust is the best fit so far.
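The difference JBK describes—deterministic reclamation under ownership versus unpredictable collector pauses—can be sketched in a few lines of Rust. This is an illustrative toy, not Scaleway code: each request's buffer is freed at a known point, so no session can be stalled by another session's garbage.

```rust
// Toy sketch: per-request buffers with deterministic lifetimes.
struct Packet {
    payload: Vec<u8>,
}

// Ownership moves in, so the buffer is dropped (freed) exactly when this
// function returns -- no collector can pause an unrelated session later.
fn process(p: Packet) -> usize {
    p.payload.iter().map(|b| *b as usize).sum()
}

fn main() {
    let mut total = 0;
    for i in 0..10_000 {
        let packet = Packet {
            payload: vec![(i % 256) as u8; 1024],
        };
        total += process(packet);
        // `packet` is already gone here: the allocation's lifetime is
        // bounded by the iteration, with no GC pause between requests.
    }
    println!("processed bytes sum: {total}");
}
```

In a GC'd runtime, the equivalent loop would accumulate garbage and pay for it at some unpredictable later point; here the cost is paid evenly, iteration by iteration, which is what real-time paths need.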
Drew: I think it’s a very natural fit for cloud because of speed and reliability. Are there any other reasons why you feel like it’s a really good fit outside of the normal ones?
JBK: Yes. People underestimate the power of ecosystems. For example, this is why so much of AI is done in Python. It’s because of PyTorch and the whole scikit-learn ecosystem. Similarly, why is so much modern networking being done in Rust? Because a lot of it was pushed by groups like Mozilla at the beginning. Now a huge ecosystem has grown. If you are going to do something like QUIC, everything is already available in Rust.
JBK: I also think C++ has become a nightmare. It pushes a lot of people away. I like simple languages, and I like the fact that there is one correct way of doing things in Rust. In C++, you can achieve the same thing in so many different ways. There’s templating, or almost functional programming, or recursive macros, or compile-time techniques. It’s just too complex. In Rust, there is one correct, idiomatic way to approach things.
JBK: Another factor is the quality of the Tokio ecosystem. There is a lot of async programming that is correctly done in Rust, which helps a lot. It makes asynchronous programming something that modern developers, who are used to something like Node, can easily adopt. That’s one of the main reasons I’ve seen for using Rust.
Drew: I think you’re right about ecosystems. I’ve had several other people mention that to me. Do you feel like there are things that are missing in the ecosystem right now that Scaleway would like to have?
JBK: To be honest, I don’t think so for Scaleway. Rust is a good fit, so I don’t think there’s much missing.
Drew: On your platform, have you seen growing demand for things that support Rust?
JBK: Yes, but not that much to be honest. I still think that Rust is a language for very technical companies. This includes cloud providers, people doing multimedia, some people doing AI, and those who are parsing a lot of data.
JBK: I’ve been involved in the global startup scene for a long time. You had the big Ruby on Rails movement, then it moved to a lot of Go, and then to a lot of Node. So far, I don’t see much change from the recent Node/Next.js shift. There’s been a bit less React, but Next.js has been so big. Node is still quite big.
JBK: I see Rust mostly in robotics, embedded systems, cloud providers, multimedia, and video games because real-time performance is important in those cases. That is mostly what I see; I don’t see much more adoption in the majority of what you’d call "normal" startups.
Drew: That makes a ton of sense. Those are the industries that I see growing for Rust as well. For a long time, there were a lot of people invested in the Rust for the web story—literally building web apps in Rust. Do you think that will materialize, or do you think that doesn’t really make sense?
JBK: I don’t think it will materialize. But, I think it will exist in a small way. For example, Actix is a very good framework, and I use it a lot on other projects. The fact is that the language grows with its community and the community creates an ecosystem focused on something. You cannot be good at everything. It’s the theory of comparative advantage. You focus on the areas where you are better than others. This defines your niche, which can be a very big niche, but you have to stay in it.
JBK: If you look closely at the web, it has native JavaScript support. It’s obvious there is a natural interaction between Node and the presence of JavaScript inside the web browser. You can argue you could do other things, and I know you can. For example, I do a lot of Rust personally to target WebAssembly. I don’t personally want to write any JavaScript. I’m originally a C and Assembly guy, and now I do Rust, Zig, and Assembly. I don’t want to do too much JavaScript. But, de facto, there will only be some specific niche cases where you see Rust on the web. I don’t think it’s going to be mainstream.
Drew: I agree. One of the things I like to do with these interviews is give people a look behind the curtain at what it’s like working at the companies we feature. Can you tell me about an interesting engineering problem that the team at Scaleway recently solved or took on?
JBK: It’s difficult to find just one because there are so many.
JBK: One of the invisible things that people don’t see at a cloud provider is that every region is usually composed of three Availability Zones. You have three data centers that are at least five or ten kilometers apart, but users want to use them as one single location. And, we as the provider want to hide that complexity.
JBK: For instance, you need to provide consistent VLAN and networking so that, to the user, it appears as if all their resources are in the same place. This requires a lot of converging algorithms to make it work across all products and users.
JBK: The two main places where you see this complexity at Scaleway are everything related to the VPC (Virtual Private Cloud), which is mostly done in Rust and uses some software called RR, and all the management of IPs. This is a complex problem. If you haven’t done networking at this scale, it may seem easy, but it is not. The other major area is Object Storage. For example, in the Paris region, we host around 150 petabytes of data. That data is stored across several data centers because you need to be able to lose one and still maintain availability. The algorithms to converge these multiple data stores are absolutely invisible to the user. This is a crucial and cool problem that needs to be done in real-time. We used to do that in C, and now we are doing most of it in Rust.
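The real convergence and failover machinery is far more sophisticated, but the core idea of surviving the loss of one zone can be sketched with a toy XOR-parity scheme (not Scaleway's actual algorithm): two data shards plus one parity shard spread across three zones, where any single shard can be rebuilt from the other two.

```rust
// Toy illustration of single-zone fault tolerance via XOR parity.
// Real object stores use erasure codes with far better trade-offs.

/// XOR two equal-length shards together.
fn parity(a: &[u8], b: &[u8]) -> Vec<u8> {
    a.iter().zip(b).map(|(x, y)| x ^ y).collect()
}

fn main() {
    let zone_a: Vec<u8> = b"hello, ".to_vec(); // shard in data center A
    let zone_b: Vec<u8> = b"world!!".to_vec(); // shard in data center B
    let zone_c = parity(&zone_a, &zone_b);     // parity shard in data center C

    // Suppose data center A goes down: rebuild its shard from B and C,
    // because (B) XOR (A XOR B) = A.
    let rebuilt_a = parity(&zone_b, &zone_c);
    assert_eq!(rebuilt_a, zone_a);
    println!("rebuilt shard: {:?}", String::from_utf8(rebuilt_a).unwrap());
}
```

With this layout, losing any one of the three zones leaves enough information to reconstruct the missing shard; the hard, invisible part JBK mentions is keeping the shards convergent while writes keep arriving.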
Drew: I also saw that Scaleway is one of those companies that has a nickname for its employees. You call them Scalers. (Laughing)
JBK: (Laughing) Yeah, it’s a bit of a joke. The idea is to be a hyperscaler and to scale the company, so "Scalers" is a fun name.
Drew: What makes a Scaler different from an average software engineer?
JBK: I’ve created 10 companies and advised probably 30 startups over the last 20 years, and I’m active in many open source projects. What you see at Scaleway is that people are very competent. Running a cloud provider is complex. It’s a bit like the people I work with at VLC—the people are extremely good. A lot of them could work at Google, Apple, Mozilla, or Meta without any problem. So, the technical level at Scaleway is very high.
Drew: That makes sense, and I think it’s good to hear. You want your cloud provider to have a ton of in-house competence.
Drew: When I was reading your job descriptions, it sounded like Scaleway puts a huge emphasis on providing a really balanced life for its employees. Can you tell me more about that?
JBK: I think it might be difficult for Americans to hear this, but we have five to seven weeks of vacation, and those are actual vacations. People have normal work hours. They have some intense hours, but they are not working until two o’clock in the morning or 10 PM. By 7 PM, most people are gone. We have a lot of things happening on-site and lots of events. There is a large focus on training. This is all very different compared to a lot of startups, especially in the US, where you work much more.
JBK: Also, we try to focus on being very effective. That is very important because people forget that. Being effective means you can avoid spending too much time on meetings and other things that basically destroy your performance. One of the things I did as CTO was to reduce and kill many meetings so that engineers can actually engineer.
Drew: So this seems to be a very exciting time for Scaleway. You mentioned a really exceptional growth rate, and the AI boom is obviously very interesting for your business. What are you personally most excited about in the future of Scaleway?
JBK: I think people are starting to realize that we actually can be an alternative to American cloud providers. It’s not just words. It actually works today. This is really cool and makes so many people interested in what we’re doing.
JBK: Scaleway is reaching a point where I think we will continue seeing growth, with more people moving over. Because all the base products are now built, we can now focus on optimizing rather than just building features, and that optimization is one of the cool parts of working on the cloud.
Drew: You’re approaching a sort of tipping point.
JBK: Yes, I think so.
Drew: Is there anything else you wish that we had the chance to discuss?
JBK: Yeah, actually I’m doing lots of other things with Rust outside of Scaleway. For example, I’ve been working on the VLC media player project for 20 years. VLC is mostly in C and Assembly, but for the last few years, we’ve had a way to create VLC modules in Rust. VLC is built with around 500 modules, and some are now written directly in Rust because all the internal VLC APIs are exposed as Rust traits.
JBK: That project is interesting, but it’s also where we see some of the limits of Rust. In my opinion, Rust is not really easy to integrate inside a C codebase. It works, but there are so many things you need to do with bindgen and Cargo. It’s not a pleasant experience, but once it’s set up, it works. Especially in VLC, where we parse a lot of files, formats, and network streams, having the security and the borrow checker is useful. So, I’ve been doing that with Rust for the last six or seven years.
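The arrangement JBK describes—internal player APIs mirrored as Rust traits so modules can be written in Rust—might look roughly like this. The trait and all names below are invented for illustration; VLC's real module API is different:

```rust
// Hypothetical sketch of a plugin interface exposed as a Rust trait.
// Names are illustrative only; they are not VLC's actual API.

/// What a demuxer module must provide to the host player.
trait Demuxer {
    /// Returns true if this module can handle the given byte signature.
    fn probe(&self, header: &[u8]) -> bool;
    /// Parses one chunk of input, returning how many bytes were consumed.
    fn demux(&mut self, input: &[u8]) -> usize;
}

/// A trivial module that recognizes a fake "RIFF"-style header.
struct RiffDemuxer {
    consumed: usize,
}

impl Demuxer for RiffDemuxer {
    fn probe(&self, header: &[u8]) -> bool {
        header.starts_with(b"RIFF")
    }
    fn demux(&mut self, input: &[u8]) -> usize {
        self.consumed += input.len();
        input.len()
    }
}

/// The host (C, in VLC's case) would see the module through an opaque
/// handle; here we just hand back a boxed trait object on the Rust side.
fn open_module() -> Box<dyn Demuxer> {
    Box::new(RiffDemuxer { consumed: 0 })
}

fn main() {
    let mut module = open_module();
    assert!(module.probe(b"RIFF....WAVE"));
    let n = module.demux(&[0u8; 16]);
    println!("demuxed {n} bytes");
}
```

The appeal of the pattern is that a new module only has to implement the trait; the host-side glue (the bindgen/Cargo plumbing JBK calls unpleasant) is written once.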
JBK: I am also launching a new project called Kyber, which is fully in Rust. It’s an SDK to control machines. It can power remote desktop software like TeamViewer or Citrix, and also do remote 3D and remote AI rendering that you stream directly to a smaller device like your phone. It also handles the control of robots or drones at a distance. It’s really about extremely low-latency control of machines. Everything we do there is in Rust. It was full Rust from day one.
JBK: Rust is a pleasure in this context. Some parts of Kyber are based on VLC, so we support many platforms, but there is one big platform which is the web. We don’t write JavaScript there. We do everything in Rust and compile it all with the Rust compiler for WebAssembly. This is much better than the things I’ve done in the past using Emscripten. We can use the web APIs directly from Rust, and it has been a great experience.
Drew: That’s cool. I’ve heard of several companies—I think all robotics companies—that use Rust for everything in their stack. They also use it on the web because it makes it easy for their same software engineers to build that stuff.
JBK: For us, the core of Kyber is a custom networking protocol for real-time streaming of data. It’s the same code that runs on Windows, Linux, Mac, Android, Apple TV, iOS, Quest, Apple Vision, and the web. That is possible because we are using Rust.
Drew: Yeah, that’s a big deal.
JBK: So yeah, that is one of the other things that is outside of Scaleway, but I’m a big Rust user on those projects: VLC and Kyber.
Drew: You are very entrepreneurial and have so many projects going on.
JBK: Yeah, VLC is mostly an open-source project with a non-profit. But, it’s really cool to be doing many things. When you’re blocked on one project, you need time to process it in the background of your head. So, working on something else often unlocks the other parts. It’s a weird way of working, but it works fine for me.
Drew: I wanted to ask a follow-up about Kyber. What are you doing differently to achieve that super low latency that you mentioned?
JBK: For a long time, the focus in video has been on quality—high fidelity, 4K, HDR, and so on. But, when you’re controlling a machine, the video is more about visual feedback, especially when you are far from it. So, with Kyber, we don’t really care about the quality as much. Quality is nice, and we want it to be the best, but latency is more important because you are controlling a machine. That could be a car or a drone in the real world. Or, if it’s software, you want your clicks to go as fast as possible. If you’ve ever used VNC, clicking and waiting for the mouse in the menu is horrible.
JBK: So, everything is focused on having the lowest latency ever, even if it means degrading the quality some. This is a very different approach. We also think that on the client side, you can recreate some of the quality using machine learning if you have enough visual data. We focus only on the lowest latency possible, and then we figure out how to do bandwidth adaptation and increase quality. We start from the opposite point.
JBK: Also, we base everything on QUIC, which is very well integrated into the Rust ecosystem, and that was one of the big reasons for using Rust.
JBK: The other thing that is different about Kyber is that many remote control software solutions are very narrow. TeamViewer is for PCs; others are just for drones. The idea for Kyber was to create something that works everywhere. We achieved this by using a lot of modules and everything we know about FFmpeg and VLC. The difference in our stack between controlling a drone and controlling a computer is very small—hundreds of lines of code—because everything is modular, and we reuse a lot of what we did for VLC.
JBK: By reusing that, our server runs on Mac, Windows, Linux, Android, and iOS. Our client runs on Windows, Linux, Mac, Android, iOS, Apple TV, Android TVs, Chromebooks, iPads, Apple Vision, Quest, and so on. Because we use Rust, we also have the WebAssembly version. Before Scaleway, I was the CEO of a cloud gaming company, and supporting all those OSes took months of development. Here, we simply use a cross-platform solution called Rust.
JBK: Rust’s lightweight trait abstractions are very important to us. For every OS, we need to be able to remote a mouse, keyboard, gamepads, USB over IP, printers over IP, copy-paste, and file transfer. We have an abstraction using traits that we serialize over the network. Then, you just focus on the integration. When you move to a new OS, you do the small integration, and it scales correctly, helping a lot with development time.
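The pattern JBK describes—a trait per remoted device, with events serialized into a compact wire format so only a thin per-OS integration is needed—could be sketched like this. All names are invented for illustration; Kyber's real SDK will differ:

```rust
// Illustrative sketch: a wire format for input events plus a per-OS trait.

/// Events remoted across the network, encoded as compact byte frames.
#[derive(Debug, PartialEq)]
enum InputEvent {
    MouseMove { x: i16, y: i16 },
    KeyPress { code: u8 },
}

impl InputEvent {
    /// Serialize into a tagged little-endian frame.
    fn encode(&self) -> Vec<u8> {
        match self {
            InputEvent::MouseMove { x, y } => {
                let mut f = vec![0u8]; // tag 0 = mouse move
                f.extend_from_slice(&x.to_le_bytes());
                f.extend_from_slice(&y.to_le_bytes());
                f
            }
            InputEvent::KeyPress { code } => vec![1u8, *code], // tag 1 = key
        }
    }

    /// Parse a frame back into an event; None on malformed input.
    fn decode(frame: &[u8]) -> Option<InputEvent> {
        match frame {
            [0, a, b, c, d] => Some(InputEvent::MouseMove {
                x: i16::from_le_bytes([*a, *b]),
                y: i16::from_le_bytes([*c, *d]),
            }),
            [1, code] => Some(InputEvent::KeyPress { code: *code }),
            _ => None,
        }
    }
}

/// Each OS implements this trait; the networking core stays identical.
trait InputSink {
    fn inject(&mut self, ev: InputEvent);
}

/// A stand-in "OS backend" that just records what it received.
struct RecordingSink {
    log: Vec<InputEvent>,
}

impl InputSink for RecordingSink {
    fn inject(&mut self, ev: InputEvent) {
        self.log.push(ev);
    }
}

fn main() {
    let ev = InputEvent::MouseMove { x: 120, y: -45 };
    let frame = ev.encode(); // sender side
    let decoded = InputEvent::decode(&frame).unwrap(); // receiver side
    let mut sink = RecordingSink { log: vec![] };
    sink.inject(decoded);
    assert_eq!(sink.log[0], InputEvent::MouseMove { x: 120, y: -45 });
    println!("delivered: {:?}", sink.log);
}
```

Porting to a new OS then means implementing one trait against the native input APIs; the protocol, encoding, and transport are untouched, which matches the small per-platform diff JBK describes.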
JBK: Those are the main things. We also do a ton of work on security, but that would need a whole other podcast. We’re doing things quite differently, and since we care about speed and non-stop operation, Rust is a good fit for that.
Drew: I think we could talk forever, but this is a good place to stop. Thank you so much. This was a fun conversation.
JBK: Thanks, Drew.
Know someone we should interview? Let us know: filtra@filtra.io