Exploring how MCP might look rebuilt on gRPC with typed schemas
I read this blog a few weeks ago and even defended MCP for where it made sense. But I’ve never worked with gRPC. Each time I came across a new MCP issue or library, I revisited the blog to see if it offered relevant insights. So I spent a couple of days learning more about gRPC to see what a gRPC version of a tool-calling protocol would look like. For simplicity, I’ll refer to this as gMCP.
Note: This is the highly opinionated take of an engineer who knows nothing about gRPC and a little something about MCP, on what MCP would look like in the gRPC world.
So here goes nothing.
Quick Recap
The blog from Julien Simon argued that MCP should have been built on top of gRPC. Here are some of the red flags he raised:
1. Opting for schemaless JSON
2. Lacking generated bindings across client and server
3. Mixing stateful and stateless operations
4. Treating observability as optional
5. Dependency on a constellation of third-party libraries
6. Following a patchwork system
7. Omitting schema versioning for servers (tool updates may break clients)
You can read more here:
The Process
I used NotebookLM to learn about gRPC, ChatGPT Deep Research and Claude for ideation, and Cursor (with gpt-5-high) and Claude Code for implementations.
Envisioning gMCP
I didn’t set out to rebuild MCP fully; my goal is simply to validate the key concerns raised. Hence the focus is on the essence of MCP: tool calls. From my limited experience using and building with MCP, I haven’t yet used prompts, resources, elicitation, sampling, or roots. I also skipped initialization and removed sessions completely.
gMCP is tied to the way gRPC works, leaning on it as much as possible to see how far I can get. I wanted to use gRPC’s strong typing and multi-language binding capabilities, but each tool needs a dynamic schema, so the schema must be shared on the wire at runtime. This is a solved problem in gRPC with server reflection: “gRPC Server Reflection provides information about publicly-accessible gRPC services on a server, and assists clients at runtime to construct RPC requests and responses without precompiled service information.” Discovery can also be powered by having the server publish a generated FileDescriptorSet at a well-known URI.
gRPC uses Protocol Buffers, but LLMs work better with JSON/text. The simple solution I’m using is to convert the buffers to JSON on the client side.
With this, the core mcp.v0 consists of two proto files:
- mcp.proto: the MCP service definition
- server_meta.proto: a standard server meta description
// mcp.proto
syntax = "proto3";

package mcp.v0;

import "google/protobuf/any.proto";

option go_package = "./proto/mcp/v0";

service McpService {
  rpc ListTools (ListToolsRequest) returns (ListToolsResponse);
  rpc CallTool (ToolCallRequest) returns (stream ToolCallChunk);
}

message ListToolsRequest {
  string cursor = 1;
  int32 page_size = 2;
}

message ListToolsResponse {
  repeated Tool tools = 1;
  string next_cursor = 2;
  // NOTE: no server_id/protocol_version here (embedded in descriptors via FileOptions).
}

message Tool {
  string name = 1;
  string title = 2;
  string description = 3;
  string input_type = 4;  // fully-qualified proto message name
  string output_type = 5; // fully-qualified proto message name
  map<string, string> annotations = 9; // e.g., {"idempotent":"true"}
}

message ToolCallRequest {
  string name = 1;                         // must match Tool.name
  google.protobuf.Any typed_arguments = 2; // must match Tool.input_type
  string request_id = 3;                   // tracing/idempotency
}

message ToolCallChunk {
  oneof payload {
    google.protobuf.Any result = 1; // packed Tool.output_type
    ToolError error = 2;            // terminal
  }
  uint32 seq = 9;  // 0..N, per-call seq
  bool final = 10; // exactly one final=true ends stream
}

message ToolError {
  enum Code {
    UNKNOWN = 0;
    INVALID_ARGUMENT = 1;
    NOT_FOUND = 2;
    PERMISSION_DENIED = 3;
    DEADLINE_EXCEEDED = 4;
    INTERNAL = 5;
    UNAVAILABLE = 6;
  }
  Code code = 1;
  string message = 2;
  google.protobuf.Any details = 3;
}
// server_meta.proto
syntax = "proto3";

package mcp.v0.meta;

import "google/protobuf/descriptor.proto";

option go_package = "./proto/mcp/v0";

message ServerMeta {
  string server_id = 1;      // e.g., "acme-weather"
  string server_version = 2; // e.g., "1.0.0" (bump on ANY schema change)
}

extend google.protobuf.FileOptions {
  ServerMeta mcp_server_meta = 777001;
}
Server side
gMCP servers have the following responsibilities:
- Define proto files with server-specific input and output types for tools.
syntax = "proto3";

package weather;

option go_package = "./proto";

message GetWeatherRequest {
  string location = 1;
  string units = 2;
}

message GetWeatherResponse {
  double temperature_c = 1;
  string conditions = 2;
  uint32 humidity = 3;
}
- Implement the MCP-style API (ListTools and CallTool)
type simpleService struct {
	mcpv0.UnimplementedMcpServiceServer
}

func (s *simpleService) ListTools(ctx context.Context, req *mcpv0.ListToolsRequest) (*mcpv0.ListToolsResponse, error) {
	tools := []*mcpv0.Tool{
		{
			Name:        "get_weather",
			Title:       "Get Weather",
			Description: "Get current weather conditions for a location",
			InputType:   "weather.GetWeatherRequest",
			OutputType:  "weather.GetWeatherResponse",
			Annotations: map[string]string{"idempotent": "true"},
		},
	}
	return &mcpv0.ListToolsResponse{Tools: tools}, nil
}
- Enable gRPC server reflection and register the server descriptor (including server ID and version), so clients can query input/output types provided via ListTools.
func main() {
	// Inject server meta into reflection descriptors so clients can read it
	registerServerMeta()

	lis, err := net.Listen("tcp", ":8443")
	if err != nil {
		log.Fatalf("Failed to listen: %v", err)
	}

	s := grpc.NewServer()
	service := &simpleService{}
	mcpv0.RegisterMcpServiceServer(s, service)
	reflection.Register(s)

	if err := s.Serve(lis); err != nil {
		log.Fatalf("Failed to serve: %v", err)
	}
}
Client side
In gMCP, clients need to follow an installation workflow. It goes as follows:
- Connect to the server.
- Fetch server metadata via reflection, including the current version.
- Fetch tools using ListTools, which now includes input_type and output_type. These are fully qualified proto message names; servers define the types, and the client can fetch them via reflection.
- Resolve tool schemas via reflection (or the downloaded descriptor set) and build a lightweight internal registry, which is verified by the user.
- Cache descriptors by server identity and version, refreshing only when the server version changes.

Most of this is implemented in client/ts/reflection.ts.
This is similar to how MCP servers are installed, but clients can now validate the tools’ types in the UI in addition to their names and descriptions. Unexpected type changes may break clients, so users will need to validate any schema updates.
The internal registry lets the client-side code validate user or LLM inputs, as well as responses, against the expected schema — catching missing fields, wrong types, bad enums, and other rule violations. When a call is made, the client packs the validated input into a correctly typed google.protobuf.Any, and when results stream back, it decodes the Any into readable JSON for display and downstream use. In other words: strong types on the wire, human- and LLM‑friendly JSON at the edges.
I implemented a client side Node server that interacts with the gMCP Go server and a Vite-based UI to test this out.
All of the code for the protocol definitions, Go server, TS client is available at https://github.com/bh-rat/gMCP
Final thoughts
These are my takeaways:
- gRPC’s strong typing and validation support could help MCP a lot. Structured schemas and validation will only become more important, and MCP’s addition of output_schema shows the community agrees.
- gRPC’s multi-language binding generation is great and well suited to the cross-language interoperability MCP demands. I wrote a Go server and used it with a TypeScript-based client.
- The gRPC ecosystem comes with a good suite of battle-tested tools. I used Kong as the API gateway, which offers a whole host of plugins — https://developer.konghq.com/plugins/ (I used JWT, rate limiting, OpenTelemetry, Prometheus, and Jaeger tracing).
- gRPC has a learning and adoption curve — but maybe that’s good considering what’s happening in the MCP space right now.
- Browsers cannot make native HTTP/2 gRPC requests directly; they need gRPC-Web, which requires a proxy such as Envoy or Kong in front of the server.
- gRPC has largely been adopted for internal use cases; JSON remains the preferred choice for public-facing APIs. Still, this was a valuable learning exercise that took just a few days. Smart folks at Sequoia did say they’re looking forward to a lot more being done in the protocol space, and I agree.
Overall, I agree the MCP community is re-solving some existing problems. But if that’s the cost of simplicity and broader adoption, is it worth it?
Side note: Between gpt-5-high and Claude, gpt-5-high won big time on this task. It implemented, debugged, and answered with very high accuracy, whereas Claude hallucinated and lied.
Have any feedback or want to talk about MCP? Feel free to connect and DM me on LinkedIn.