Fabric for AI

Accelerate AI workloads from end to end

Transform data into intelligence faster with seamless connectivity across clusters, models, workflows, and environments.

Compute + data movement drive AI success

Traditional networks can’t keep up with dynamic AI workloads that demand rapid iteration, flexibility, visibility, and real-time control. As AI models scale across regions and environments, the network is no longer just a transport layer. It has become a core AI lifecycle orchestrator, enabling low-latency movement of data and models across GPU clusters to accelerate optimization end to end.

Built for AI data, models, and compute

Optimize AI connectivity with an ultra-low latency global fabric that links compute across all your clouds and regions.

Optimize AI operations worldwide

Automate orchestration

Configure connections instantly through APIs that integrate directly with orchestration systems.
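
As an illustration only, the sketch below shows what API-driven provisioning could look like from an orchestration script. The endpoint, token, and field names (provision_connection, /v1/connections, bandwidth_mbps) are hypothetical assumptions, not a published API specification.

```python
import requests

# Hypothetical values; substitute your actual fabric API endpoint and token.
API_BASE = "https://api.fabric.example.com/v1"
TOKEN = "YOUR_API_TOKEN"

def provision_connection(src_port: str, dst_port: str, bandwidth_mbps: int) -> dict:
    """Request a point-to-point Layer 2 connection between two fabric ports.

    Field names below are illustrative assumptions, not a documented schema.
    """
    response = requests.post(
        f"{API_BASE}/connections",
        json={
            "source_port": src_port,
            "destination_port": dst_port,
            "bandwidth_mbps": bandwidth_mbps,
        },
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # Example: bring up a 100 Gbps path from a Singapore core port to a Johor
    # edge port ahead of a training run.
    connection = provision_connection("SG1-PORT-01", "JH1-PORT-02", 100_000)
    print(connection)
```

A call like this can be triggered from the same pipeline that launches a training job, so the high-bandwidth path exists only while the workload needs it.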

Scale on demand

Establish high-bandwidth paths in minutes during training without deploying any hardware.

Connect to clouds faster

Ingest training data rapidly with direct connections to AWS, Azure, Google Cloud, and more.

Keep data secure

Move models and training data on dedicated, private Layer 2 links free from prying eyes and jitter.

Accelerate edge inference

Run inference at edge locations near your users for faster responses and lower latency.


Interconnect across regions

Link major AI hubs like Singapore, Hong Kong, Tokyo, Jakarta, Los Angeles, and Frankfurt.

Singapore metropolitan network

Connect directly to Asia's core compute clusters with high-capacity bandwidth and low latency.

Data center overview

Core data centers

  • Global Switch Tai Seng
  • Equinix SG1


Edge data centers

  • Global Switch Woodlands
  • Equinix SG2
  • Equinix SG5
  • Telin 3 Data Center
  • DRT Jurong
  • Keppel DC SGP1
  • Equinix JH1

Networking capabilities

Network redundancy

  • Multiple route options provided
  • KMZ files available


Network latency

  • Less than 1 ms

Ports

  • Supports 10GE or 100GE
  • 400GE ports available in core locations on request


Underlying technology

  • DWDM

Singapore-Johor dedicated line

Move AI data with ultra-low latency between Asia’s key AI hubs.

Network capacity

  • Equinix JH1 – Equinix SG1: 2 × 1.6 Tbps
  • Equinix JH1 – Global Switch: 2 × 1.6 Tbps
  • Equinix SG1 – Global Switch: 2 × 1.6 Tbps


Network latency

  • Less than 2 ms


Ports

  • Supports 10GE or 100GE
  • 400GE ports available on demand

Scale AI without boundaries

Connect with an AI expert to power your AI workloads globally with reliable, ultra-low latency network performance.

Global service, local support

24/7 live technical support included

< 15-minute response time

95% of tickets resolved in < 4 hours
