gRPC Transport
Introduction
The gRPC transport tunnels proxy traffic over HTTP/2 using the gRPC framework. Data is encapsulated in protobuf-defined Hunk messages sent through bidirectional streaming RPCs. This transport supports customizable service/stream names (to disguise as legitimate gRPC services), a "multi" mode that batches multiple buffers per message, connection pooling, keepalive, and authority header control. It works with TLS, REALITY, and uTLS fingerprinting.
Protocol Registration
Registered as "grpc" (transport/internet/grpc/grpc.go:3):
```go
const protocolName = "grpc"
```

- Dialer: grpc/dial.go:37-39
- Listener: grpc/hub.go:137-139
- Config: grpc/config.go:11-15
Service Name Architecture
Old Style (Default)
When ServiceName does not start with /, it is treated as the classic gRPC service name. The stream names default to "Tun" and "TunMulti":
- /GunService/Tun (single-buffer mode)
- /GunService/TunMulti (multi-buffer mode)

New Custom Path Style
When ServiceName starts with /, it is parsed as a full custom path (grpc/config.go:17-59):
```go
// ServiceName        = "/my/custom/path/StreamA|StreamB"
// serviceName        = "my/custom/path"
// tunStreamName      = "StreamA"
// tunMultiStreamName = "StreamB"
```

The format is: /<service_path>/<tun_name>|<tun_multi_name>

On the client side in multi mode, the full path is used directly (no | splitting):

```go
// ServiceName = "/my/custom/path/StreamB" (client multi mode)
```

This allows operators to disguise gRPC traffic as an arbitrary gRPC service.
Dial Flow
Connection Pooling
getGrpcClient (grpc/dial.go:77-193) manages a global pool of grpc.ClientConn objects:
```go
var (
	globalDialerMap    map[dialerConf]*grpc.ClientConn
	globalDialerAccess sync.Mutex
)
```

Connections are keyed by {Destination, MemoryStreamConfig}. An existing connection is reused unless its state is connectivity.Shutdown (dial.go:89-91).
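The reuse pattern can be modeled with placeholder types: here `dialerConf`'s string fields and `conn` stand in for the real {Destination, MemoryStreamConfig} key and `*grpc.ClientConn`, and the boolean `shutdown` flag stands in for the connectivity.Shutdown state check. A simplified sketch, not the actual getGrpcClient:

```go
package main

import (
	"fmt"
	"sync"
)

// Placeholder key and connection types for the sketch.
type dialerConf struct{ dest, streamCfg string }

type conn struct{ shutdown bool }

var (
	globalDialerMap    map[dialerConf]*conn
	globalDialerAccess sync.Mutex
)

// getClient returns a pooled connection for key, dialing only when no
// live connection exists (mirroring the Shutdown-state check).
func getClient(key dialerConf, dial func() *conn) *conn {
	globalDialerAccess.Lock()
	defer globalDialerAccess.Unlock()
	if globalDialerMap == nil {
		globalDialerMap = make(map[dialerConf]*conn)
	}
	if c, ok := globalDialerMap[key]; ok && !c.shutdown {
		return c // reuse the existing connection
	}
	c := dial()
	globalDialerMap[key] = c
	return c
}

func main() {
	key := dialerConf{"example.com:443", "tls"}
	a := getClient(key, func() *conn { return &conn{} })
	b := getClient(key, func() *conn { return &conn{} })
	fmt.Println(a == b) // true: second call reuses the pooled conn
}
```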
Client Connection Setup
When creating a new gRPC client connection (grpc/dial.go:93-193):
- Backoff: exponential, starting at 500ms, max 19s, jitter 0.2 (dial.go:94-102)
- Context dialer: a custom grpc.WithContextDialer that:
  - calls internet.DialSystem for the raw TCP connection
  - applies TLS (standard or uTLS) if configured
  - applies REALITY if configured
  - propagates the outbound session context (dial.go:103-146)
- Insecure credentials: always grpc.WithTransportCredentials(insecure.NewCredentials()), because TLS is handled at the raw connection level, not via gRPC's credential system (dial.go:148)
- Authority: set from config, or the TLS ServerName, or the destination domain (dial.go:150-158)
- Keepalive: optional ClientParameters with configurable idle timeout, health-check timeout, and permit-without-stream (dial.go:160-166)
- Initial window size: optional gRPC flow-control window (dial.go:168-170)
- User-Agent override: uses reflection to set the user agent, removing the default grpc-go/<version> suffix (dial.go:184-201)
Stream Establishment
dialgRPC (grpc/dial.go:51-75) opens the appropriate stream:
```go
func dialgRPC(ctx context.Context, dest net.Destination,
	streamSettings *internet.MemoryStreamConfig) (net.Conn, error) {
	grpcSettings := streamSettings.ProtocolSettings.(*Config)
	conn, _ := getGrpcClient(ctx, dest, streamSettings)
	client := encoding.NewGRPCServiceClient(conn)
	if grpcSettings.MultiMode {
		grpcService, _ := client.(encoding.GRPCServiceClientX).TunMultiCustomName(
			ctx, grpcSettings.getServiceName(), grpcSettings.getTunMultiStreamName())
		return encoding.NewMultiHunkConn(grpcService, nil), nil
	}
	grpcService, _ := client.(encoding.GRPCServiceClientX).TunCustomName(
		ctx, grpcSettings.getServiceName(), grpcSettings.getTunStreamName())
	return encoding.NewHunkConn(grpcService, nil), nil
}
```

Listen Flow
Server Setup
grpc.Listen (grpc/hub.go:53-135) creates a gRPC server:
```go
func Listen(ctx context.Context, address net.Address, port net.Port,
	settings *internet.MemoryStreamConfig, handler internet.ConnHandler) (internet.Listener, error) {
	// ...
	s = grpc.NewServer(options...)
	// Register with custom names:
	encoding.RegisterGRPCServiceServerX(s, listener,
		grpcSettings.getServiceName(),
		grpcSettings.getTunStreamName(),
		grpcSettings.getTunMultiStreamName())
	// ...
	s.Serve(streamListener)
}
```

Stream Handlers
The Listener struct implements GRPCServiceServer (grpc/hub.go:20-42):
```go
func (l Listener) Tun(server encoding.GRPCService_TunServer) error {
	tunCtx, cancel := context.WithCancel(l.ctx)
	l.handler(encoding.NewHunkConn(server, cancel))
	<-tunCtx.Done()
	return nil
}

func (l Listener) TunMulti(server encoding.GRPCService_TunMultiServer) error {
	tunCtx, cancel := context.WithCancel(l.ctx)
	l.handler(encoding.NewMultiHunkConn(server, cancel))
	<-tunCtx.Done()
	return nil
}
```

The handler blocks on tunCtx.Done(), keeping the gRPC stream alive until the connection is closed.
Custom Service Registration
RegisterGRPCServiceServerX (grpc/encoding/customSeviceName.go:57-60) creates a custom grpc.ServiceDesc:
```go
func RegisterGRPCServiceServerX(s *grpc.Server, srv GRPCServiceServer,
	name, tun, tunMulti string) {
	desc := ServerDesc(name, tun, tunMulti)
	s.RegisterService(&desc, srv)
}
```

The ServerDesc (customSeviceName.go:9-30) generates a service descriptor with:

- a custom ServiceName
- two bidirectional streams with custom names
- both ServerStreams: true and ClientStreams: true
Wire Format
Protobuf Messages
```protobuf
message Hunk {
  bytes data = 1;
}

message MultiHunk {
  repeated bytes data = 1;
}
```

Single Mode (Tun)
Each Write call sends one Hunk with the data bytes:
```go
// encoding/hunkconn.go:131-141 (simplified)
func (h *HunkReaderWriter) Write(buf []byte) (int, error) {
	if err := h.hc.Send(&Hunk{Data: buf}); err != nil {
		return 0, err
	}
	return len(buf), nil
}
```

Reading fetches one Hunk at a time and copies from its Data field:
```go
// encoding/hunkconn.go:91-105 (simplified)
func (h *HunkReaderWriter) Read(buf []byte) (int, error) {
	if h.index >= len(h.buf) {
		h.forceFetch() // Recv() the next Hunk
	}
	n := copy(buf, h.buf[h.index:])
	h.index += n
	return n, nil
}
```

Multi Mode (TunMulti)
Multi mode batches multiple buffers in a single gRPC message:
```go
// encoding/multiconn.go:115-134 (simplified)
func (h *MultiHunkReaderWriter) WriteMultiBuffer(mb buf.MultiBuffer) error {
	hunks := make([][]byte, 0, len(mb))
	for _, b := range mb {
		if b.Len() > 0 {
			hunks = append(hunks, b.Bytes())
		}
	}
	return h.hc.Send(&MultiHunk{Data: hunks})
}
```

This reduces per-message overhead when multiple small writes are batched.
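The batching step can be isolated as a plain-slice sketch; `packHunks` is illustrative (the real code iterates a buf.MultiBuffer), but the logic is the same: every non-empty buffer from one write becomes one entry in a single MultiHunk payload.

```go
package main

import "fmt"

// packHunks collects all non-empty buffers from one multi-buffer write
// into a single MultiHunk-style payload, so N small writes cost one
// gRPC message instead of N.
func packHunks(mb [][]byte) [][]byte {
	hunks := make([][]byte, 0, len(mb))
	for _, b := range mb {
		if len(b) > 0 {
			hunks = append(hunks, b) // empty buffers are skipped
		}
	}
	return hunks
}

func main() {
	mb := [][]byte{[]byte("hello"), {}, []byte("world")}
	fmt.Println(len(packHunks(mb))) // 2: the empty buffer is dropped
}
```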
Network Flow
```mermaid
sequenceDiagram
    participant C as Client
    participant GC as gRPC Client
    participant H2 as HTTP/2
    participant GS as gRPC Server
    participant S as Server
    C->>GC: Write(data)
    GC->>H2: DATA frame (Hunk{data})
    H2->>GS: DATA frame
    GS->>S: Read() -> data
    S->>GS: Write(response)
    GS->>H2: DATA frame (Hunk{response})
    H2->>GC: DATA frame
    GC->>C: Read() -> response
```

Connection Wrapping
HunkConn
NewHunkConn (encoding/hunkconn.go:41-73) wraps a gRPC stream as net.Conn:
- Uses cnc.NewConnection from common/net/cnc to build a net.Conn
- Extracts the remote address from gRPC peer.FromContext
- Supports the x-real-ip metadata header for real-IP passthrough
MultiHunkConn
NewMultiHunkConn (encoding/multiconn.go:37-69) is similar but uses ConnectionInputMulti/ConnectionOutputMulti for batch buffer operations.
Both types implement StreamCloser; its CloseSend() signals the end of the client-side stream.
TLS and Security
Client-side TLS
TLS is handled at the raw connection level in the context dialer (grpc/dial.go:128-143):
```go
if tlsConfig != nil {
	config := tlsConfig.GetTLSConfig()
	if fingerprint := tls.GetFingerprint(tlsConfig.Fingerprint); fingerprint != nil {
		return tls.UClient(c, config, fingerprint), nil
	}
	return tls.Client(c, config), nil
}
if realityConfig != nil {
	return reality.UClient(c, realityConfig, gctx, dest)
}
```

This bypasses gRPC's built-in TLS, using insecure.NewCredentials() at the gRPC level.
Server-side TLS
On the server, TLS is handled differently: via gRPC's credential system (grpc/hub.go:82-85):

```go
if config != nil {
	options = append(options, grpc.Creds(credentials.NewTLS(
		config.GetTLSConfig(tls.WithNextProto("h2")))))
}
```

REALITY is handled by wrapping the listener (hub.go:126-128):

```go
if config := reality.ConfigFromStreamSettings(settings); config != nil {
	streamListener = goreality.NewListener(streamListener, config.GetREALITYConfig())
}
```

Implementation Notes
- Connection reuse: gRPC multiplexes streams over a single HTTP/2 connection. The globalDialerMap caches ClientConn objects to avoid reconnecting for every new proxy connection.
- Authority header: critical for CDN/reverse-proxy scenarios. Priority: explicit config > TLS ServerName > destination domain (dial.go:150-158).
- User-Agent hack: gRPC-Go unconditionally appends grpc-go/<version> to the user agent. Xray uses reflect plus unsafe.Pointer to overwrite this (dial.go:197-201), defaulting to a Chrome user-agent string.
- URL-encoded names: service and stream names are URL-path-escaped to ensure valid gRPC paths (config.go:17-58).
- passthrough resolver: the grpc.NewClient call uses the passthrough:/// scheme to disable gRPC's DNS resolution, since Xray handles resolution itself (dial.go:179-180).
- Server blocking: the Tun/TunMulti handlers block on tunCtx.Done(). The context is cancelled when the HunkReaderWriter is closed, which unblocks the handler and ends the gRPC stream.
- No header obfuscation: unlike the TCP transport, gRPC does not support ConnectionAuthenticator header wrapping.
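The authority-selection priority can be expressed as a tiny helper; `pickAuthority` and its parameter names are illustrative, not the dial.go API.

```go
package main

import "fmt"

// pickAuthority sketches the priority order for the HTTP/2 :authority
// header: explicit config first, then the TLS ServerName, then the
// destination domain as a last resort.
func pickAuthority(configAuthority, tlsServerName, destDomain string) string {
	if configAuthority != "" {
		return configAuthority
	}
	if tlsServerName != "" {
		return tlsServerName
	}
	return destDomain
}

func main() {
	// No explicit authority configured: the TLS ServerName wins.
	fmt.Println(pickAuthority("", "cdn.example.com", "origin.example.com"))
}
```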