# Reverse Proxy: Bridge + Portal Architecture
Xray's reverse proxy exposes a service behind NAT or a firewall to the public internet. A Bridge (on the private side) initiates outbound connections to a Portal (on the public side), which then multiplexes incoming client connections over those bridge tunnels using mux.
## High-Level Architecture
```mermaid
flowchart LR
    subgraph Private Network
        S[Local Service] <-->|direct| BW[BridgeWorker]
        BW <-->|mux tunnel| B[Bridge]
    end
    subgraph Public Network
        B <-->|outbound connection| P[Portal]
        P <-->|Outbound handler| PW[PortalWorker]
        PW <-->|mux dispatch| C[External Client]
    end
```

The key insight: the Bridge initiates the connection, but the Portal controls the mux. From the perspective of the mux protocol:
- The Bridge side runs a `mux.ServerWorker` (it accepts sub-connections)
- The Portal side runs a `mux.ClientWorker` (it creates sub-connections)
This inversion is what makes the reverse proxy work -- the side that accepts the TCP connection (Portal) is the one that dispatches new streams, while the side that initiated the TCP connection (Bridge) receives and handles them.
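This inversion can be sketched with a toy program (hypothetical names, stdlib only, not Xray's actual mux framing): the side that dials the connection ends up accepting logical streams, while the accepting side opens them.

```go
package main

import (
	"bufio"
	"fmt"
	"net"
)

// demo wires a fake Bridge and Portal over an in-memory pipe. The Bridge
// "dials" the connection, yet it is the Portal end that announces new
// streams, which the Bridge then handles -- mirroring the ServerWorker /
// ClientWorker inversion described above.
func demo() string {
	bridgeEnd, portalEnd := net.Pipe() // stands in for the Bridge->Portal TCP dial

	handled := make(chan string, 1)

	// Bridge side: initiated the connection, but now *accepts* streams.
	go func() {
		scanner := bufio.NewScanner(bridgeEnd)
		if scanner.Scan() {
			handled <- "bridge handled: " + scanner.Text()
		}
	}()

	// Portal side: accepted the connection, but *creates* the streams.
	fmt.Fprintln(portalEnd, "NEW_STREAM target.com:80")

	return <-handled
}

func main() {
	fmt.Println(demo())
}
```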
## Core Types
### Reverse

**File:** `app/reverse/reverse.go`

The top-level feature that holds all bridges and portals:
```go
type Reverse struct {
	bridges []*Bridge
	portals []*Portal
}

func (r *Reverse) Init(config *Config, d routing.Dispatcher, ohm outbound.Manager) error {
	for _, bConfig := range config.BridgeConfig {
		b, err := NewBridge(bConfig, d)
		if err != nil {
			return err
		}
		r.bridges = append(r.bridges, b)
	}
	for _, pConfig := range config.PortalConfig {
		p, err := NewPortal(pConfig, ohm)
		if err != nil {
			return err
		}
		r.portals = append(r.portals, p)
	}
	return nil
}
```

It requires both `routing.Dispatcher` (for the Bridge) and `outbound.Manager` (for the Portal), obtained via `core.RequireFeatures()`.
### Bridge

**File:** `app/reverse/bridge.go`

The Bridge lives on the private-side Xray instance. It manages a pool of BridgeWorker connections to the Portal.
```go
type Bridge struct {
	dispatcher  routing.Dispatcher
	tag         string
	domain      string
	workers     []*BridgeWorker
	monitorTask *task.Periodic
}
```

**Monitor loop:** Runs every 2 seconds. If there are no active workers, or if the average number of connections per worker exceeds 16, it spawns a new BridgeWorker.
```go
func (b *Bridge) monitor() error {
	b.cleanup() // remove closed workers

	var numConnections uint32
	var numWorker uint32
	for _, w := range b.workers {
		if w.IsActive() {
			numConnections += w.Connections()
			numWorker++
		}
	}

	// Spawn a new worker if needed
	if numWorker == 0 || numConnections/numWorker > 16 {
		worker, err := NewBridgeWorker(b.domain, b.tag, b.dispatcher)
		if err != nil {
			return err
		}
		b.workers = append(b.workers, worker)
	}
	return nil
}
```

### BridgeWorker
**File:** `app/reverse/bridge.go`

Each BridgeWorker establishes one mux tunnel to the Portal:
```go
type BridgeWorker struct {
	Tag        string
	Worker     *mux.ServerWorker
	Dispatcher routing.Dispatcher
	State      Control_State
	Timer      *signal.ActivityTimer
}
```

**Creation flow:**
```go
func NewBridgeWorker(domain string, tag string, d routing.Dispatcher) (*BridgeWorker, error) {
	ctx := session.ContextWithInbound(context.Background(), &session.Inbound{Tag: tag})

	// 1. Dispatch to the Portal's domain (routes through an outbound)
	link, err := d.Dispatch(ctx, net.Destination{
		Network: net.Network_TCP,
		Address: net.DomainAddress(domain),
		Port:    0,
	})
	if err != nil {
		return nil, err
	}

	w := &BridgeWorker{Dispatcher: d, Tag: tag}

	// 2. Create a mux.ServerWorker over this link; w acts as its dispatcher
	worker, err := mux.NewServerWorker(context.Background(), w, link)
	if err != nil {
		return nil, err
	}
	w.Worker = worker

	// 3. Set the inactivity timeout (60 seconds); terminate closes this
	//    worker (defined in the full source)
	w.Timer = signal.CancelAfterInactivity(ctx, terminate, 60*time.Second)
	return w, nil
}
```

The BridgeWorker implements `routing.Dispatcher`, so the `mux.ServerWorker` dispatches incoming sub-connections through it:
```go
func (w *BridgeWorker) Dispatch(ctx context.Context, dest net.Destination) (*transport.Link, error) {
	if !isInternalDomain(dest) {
		// Real traffic: dispatch through the local router
		return w.Dispatcher.Dispatch(ctx, dest)
	}

	// Control channel: build a pipe pair and handle one end internally
	uplinkReader, uplinkWriter := pipe.New()
	downlinkReader, downlinkWriter := pipe.New()
	go w.handleInternalConn(&transport.Link{Reader: downlinkReader, Writer: uplinkWriter})
	return &transport.Link{Reader: uplinkReader, Writer: downlinkWriter}, nil
}
```

**Internal domain (`"reverse"`):** Used for control messages between the Bridge and the Portal. The Bridge reads `Control` protobuf messages from the internal connection to track state:
```go
func (w *BridgeWorker) handleInternalConn(link *transport.Link) {
	reader := link.Reader
	for {
		mb, err := reader.ReadMultiBuffer()
		if err != nil {
			return
		}
		for _, b := range mb {
			var ctl Control
			if err := proto.Unmarshal(b.Bytes(), &ctl); err != nil {
				return
			}
			if ctl.State != w.State {
				w.State = ctl.State // ACTIVE or DRAIN
			}
		}
	}
}
```

### Portal
**File:** `app/reverse/portal.go`

The Portal lives on the public-side Xray instance. It registers an outbound handler and manages mux client workers.
```go
type Portal struct {
	ohm    outbound.Manager
	tag    string
	domain string
	picker *StaticMuxPicker
	client *mux.ClientManager
}
```

**Start:** Adds a custom `Outbound` handler to the outbound manager:
```go
func (p *Portal) Start() error {
	return p.ohm.AddHandler(context.Background(), &Outbound{
		portal: p,
		tag:    p.tag,
	})
}
```

**HandleConnection:** Called when traffic arrives at the Portal's outbound:
```go
func (p *Portal) HandleConnection(ctx context.Context, link *transport.Link) error {
	ob := session.OutboundFromContext(ctx) // session metadata carries the target

	if isDomain(ob.Target, p.domain) {
		// Bridge connection: create a mux.ClientWorker
		muxClient, err := mux.NewClientWorker(*link, mux.ClientStrategy{})
		if err != nil {
			return err
		}
		worker, err := NewPortalWorker(muxClient)
		if err != nil {
			return err
		}
		p.picker.AddWorker(worker)
		return nil
	}

	// Client connection: dispatch through mux to the Bridge
	return p.client.Dispatch(ctx, link)
}
```

Two types of connections arrive at the Portal:
- **Bridge connections** (destination matches the configured domain): create a new `PortalWorker` wrapping a `mux.ClientWorker`
- **Client connections** (any other destination): multiplexed through the existing mux tunnels to the Bridge
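A minimal sketch of this branching (hypothetical `classify` helper, not the real `HandleConnection`):

```go
package main

import "fmt"

// classify models the Portal's branching: a destination matching the
// configured reverse domain registers a new tunnel worker; anything else is
// client traffic dispatched over the existing mux tunnels.
func classify(destDomain, portalDomain string) string {
	if destDomain == portalDomain {
		return "bridge connection: register new PortalWorker"
	}
	return "client connection: dispatch over existing mux tunnels"
}

func main() {
	fmt.Println(classify("test.example.com", "test.example.com"))
	fmt.Println(classify("target.com", "test.example.com"))
}
```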
### PortalWorker

**File:** `app/reverse/portal.go`

Manages a single mux connection to a Bridge, including heartbeat and draining:
```go
type PortalWorker struct {
	client   *mux.ClientWorker
	control  *task.Periodic
	writer   buf.Writer
	reader   buf.Reader
	draining bool
	counter  uint32
	timer    *signal.ActivityTimer
}
```

**Heartbeat:** Runs every 2 seconds and sends a Control message on every 5th tick (i.e., every 10 seconds):
```go
func (w *PortalWorker) heartbeat() error {
	msg := &Control{}
	msg.FillInRandom()

	// Auto-drain after 256 total connections
	if w.client.TotalConnections() > 256 {
		w.draining = true
		msg.State = Control_DRAIN
	}

	w.counter = (w.counter + 1) % 5
	if w.draining || w.counter == 1 {
		b, err := proto.Marshal(msg)
		if err != nil {
			return err
		}
		return w.writer.WriteMultiBuffer(buf.MergeBytes(nil, b))
	}
	return nil
}
```

The `FillInRandom()` method adds random padding (1-64 bytes) to the control message for traffic obfuscation:
```go
func (c *Control) FillInRandom() {
	randomLength := dice.Roll(64) + 1
	c.Random = make([]byte, randomLength)
	io.ReadFull(rand.Reader, c.Random)
}
```

### StaticMuxPicker
**File:** `app/reverse/portal.go`

Selects the least-loaded, non-draining PortalWorker for new connections:

```go
func (p *StaticMuxPicker) PickAvailable() (*mux.ClientWorker, error) {
	// 1. Try non-draining workers first; pick the one with the fewest active connections
	// 2. If all workers are draining, pick among the draining ones
	// 3. Skip workers that are full
}
```

Cleanup runs every 30 seconds to remove closed workers.
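The selection policy can be sketched with a simplified model (toy `worker` struct standing in for `mux.ClientWorker`; the real picker also skips workers whose sub-connection slots are full):

```go
package main

import "fmt"

// worker is a simplified stand-in for a PortalWorker's observable state.
type worker struct {
	active   uint32 // current active sub-connections
	draining bool
	closed   bool
}

// pickAvailable prefers non-draining workers with the fewest active
// connections, falling back to draining workers only if no other is left.
func pickAvailable(workers []*worker) (*worker, bool) {
	pick := func(wantDraining bool) (*worker, bool) {
		var best *worker
		for _, w := range workers {
			if w.closed || w.draining != wantDraining {
				continue
			}
			if best == nil || w.active < best.active {
				best = w
			}
		}
		return best, best != nil
	}
	if w, ok := pick(false); ok { // 1. non-draining first
		return w, true
	}
	return pick(true) // 2. all draining: pick among them
}

func main() {
	ws := []*worker{
		{active: 10, draining: true},
		{active: 3},
		{active: 7},
	}
	w, _ := pickAvailable(ws)
	fmt.Println(w.active) // least-loaded non-draining worker wins
}
```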
## Control Protocol

The Bridge and Portal communicate state via `Control` protobuf messages over the internal domain channel:
```protobuf
message Control {
  enum State {
    ACTIVE = 0;
    DRAIN = 1;
  }

  State state = 1;
  bytes random = 99; // random padding
}
```

- **ACTIVE**: the tunnel is available for new connections
- **DRAIN**: the tunnel is being retired (too many total connections)
When a Portal sends DRAIN, the Bridge sets its State and the bridge worker is no longer considered "active," causing the monitor to spawn a replacement.
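The heartbeat cadence and the auto-drain threshold can be modeled in a few lines (toy `heartbeatState`, not the real PortalWorker): a Control message goes out on every 5th tick while active, and on every tick once draining.

```go
package main

import "fmt"

// heartbeatState models the PortalWorker's heartbeat bookkeeping: the tick
// counter, the running connection total, and the draining flag.
type heartbeatState struct {
	counter          uint32
	totalConnections uint32
	draining         bool
}

// tick runs one heartbeat iteration (every 2 seconds in the real code) and
// reports whether a Control message is sent and with which state.
func (h *heartbeatState) tick() (send bool, state string) {
	state = "ACTIVE"
	if h.totalConnections > 256 { // auto-drain threshold
		h.draining = true
	}
	if h.draining {
		state = "DRAIN"
	}
	h.counter = (h.counter + 1) % 5
	return h.draining || h.counter == 1, state
}

func main() {
	h := &heartbeatState{totalConnections: 10}
	sends := 0
	for i := 0; i < 10; i++ { // 10 ticks = 20 seconds
		if ok, _ := h.tick(); ok {
			sends++
		}
	}
	fmt.Println(sends) // 2: one Control send per 5 ticks while active
}
```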
## Connection Flow Diagram
```mermaid
sequenceDiagram
    participant Client
    participant PortalOutbound as Portal Outbound
    participant Portal
    participant MuxTunnel as Mux Tunnel
    participant Bridge as BridgeWorker
    participant LocalService as Local Service

    Note over Bridge, Portal: Bridge initiates tunnel
    Bridge->>Portal: Connect to domain "example.reverse"
    Portal->>Portal: isDomain match -> create PortalWorker
    Portal->>Bridge: Mux established

    Note over Portal, Bridge: Heartbeat loop
    loop Every 10s
        Portal->>Bridge: Control{State: ACTIVE}
        Bridge->>Bridge: Update state
    end

    Note over Client, LocalService: Client request
    Client->>PortalOutbound: Connect to target.com:80
    PortalOutbound->>Portal: HandleConnection
    Portal->>MuxTunnel: Dispatch via mux.ClientManager
    MuxTunnel->>Bridge: New mux sub-stream
    Bridge->>Bridge: BridgeWorker.Dispatch(target.com:80)
    Bridge->>LocalService: Forward to local service
    LocalService-->>Bridge: Response
    Bridge-->>MuxTunnel: Response via mux
    MuxTunnel-->>Portal: Response
    Portal-->>Client: Response
```

## Configuration
```json
{
  "reverse": {
    "bridges": [
      { "tag": "bridge", "domain": "test.example.com" }
    ],
    "portals": [
      { "tag": "portal", "domain": "test.example.com" }
    ]
  }
}
```

The `tag` on the Bridge side sets the inbound tag for dispatched traffic. The `domain` must match between the Bridge and Portal configurations. Routing rules must direct traffic destined for the domain to the appropriate outbound on the Bridge side, and the Portal's `tag` must be used as an outbound tag in routing rules for client traffic.
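As an illustration of those routing rules, a Bridge-side configuration typically looks like this (the tags and the `tunnel` outbound are illustrative; `tunnel` would be a real proxy outbound pointing at the Portal server):

```json
{
  "reverse": {
    "bridges": [{ "tag": "bridge", "domain": "test.example.com" }]
  },
  "outbounds": [
    { "tag": "tunnel", "protocol": "vless", "settings": {} },
    { "tag": "direct", "protocol": "freedom", "settings": {} }
  ],
  "routing": {
    "rules": [
      {
        "type": "field",
        "inboundTag": ["bridge"],
        "domain": ["full:test.example.com"],
        "outboundTag": "tunnel"
      },
      {
        "type": "field",
        "inboundTag": ["bridge"],
        "outboundTag": "direct"
      }
    ]
  }
}
```

The first rule sends the Bridge's tunnel-establishment traffic (destined for the reverse domain) toward the Portal; the second routes the sub-connections the Bridge receives on to their real destinations.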
## Implementation Notes
- The Bridge creates workers lazily via the monitor task. On first start, the monitor immediately detects `numWorker == 0` and creates the first BridgeWorker.
- The `Outbound` struct registered by the Portal implements `outbound.Handler` with minimal methods (`Tag()`, `Dispatch()`, `Start()`, `Close()`). It also has stub `SenderSettings()` and `ProxySettings()` methods that return nil.
- The inactivity timer on a BridgeWorker is 60 seconds. If no mux activity occurs, the worker terminates. The timer extends to 24 hours when the internal control connection is active.
- The PortalWorker's timer is 24 hours, primarily to prevent leaked goroutines rather than for traffic management.
- Bridge workers are cleaned up by the monitor's `cleanup()` method, which checks both `IsActive()` (state is ACTIVE and mux not closed) and `Closed()` (mux worker fully terminated).
- The mux implementation supports both TCP and UDP sub-streams. For UDP, the Portal applies `EndpointOverrideReader`/`EndpointOverrideWriter` to remap addresses between the original and target destinations.
- The constant `internalDomain = "reverse"` is hardcoded and used as the sentinel for control channel traffic. Any destination with address `"reverse"` is intercepted for internal use.
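The monitor's spawn decision mentioned in the first note reduces to a small predicate (simplified model over per-worker connection counts, not the real Bridge type):

```go
package main

import "fmt"

// spawnNeeded models the Bridge monitor's decision: spawn a new worker when
// no workers are active, or when the average load per active worker exceeds
// 16 connections.
func spawnNeeded(connsPerWorker []uint32) bool {
	var numConnections, numWorker uint32
	for _, c := range connsPerWorker {
		numConnections += c
		numWorker++
	}
	// The numWorker == 0 check also guards the integer division below.
	return numWorker == 0 || numConnections/numWorker > 16
}

func main() {
	fmt.Println(spawnNeeded(nil))              // true: no workers yet
	fmt.Println(spawnNeeded([]uint32{5, 10}))  // false: average load is 7
	fmt.Println(spawnNeeded([]uint32{40, 30})) // true: average load is 35
}
```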