PRODUCTS

Products

From 1P entry-level computing to enterprise heterogeneous clusters, WeCalc delivers unified-architecture, tiered solutions for every scale of on-premise AI.

Best 1P Starter

WeCalc-B Basic

1P entry-level computing — launch your local AI pilot fast

WeCalc-B is built on a "minimum viable" philosophy, integrating compute, storage, and management into one device so enterprises and universities can establish a secure, local, and scalable AI computing base within 48–72 hours.

1 PFLOPS
Max Computing Power
4TB
Local NVMe Storage
48–72h
Deployment Time
¥2,000/mo
Financing Lease Starting
Compute: 1× General-purpose CPU + Optional GPU Accelerator
Storage: 4TB NVMe SSD
Network: 25G/100G Ethernet Interface
Computing Power: Up to 1 PFLOPS (with GPU accelerator)
Use Cases: Small-scale AI Inference, Data Analytics, Teaching & Training
Deployment: 48–72 hours turnkey delivery

Ideal Customer Types

  • Enterprises wanting to pilot first
  • Budget-conscious teams
  • University teaching & training

Recommended Industries

Education · Manufacturing · Enterprise Services · Government

Lease from ¥2,000/month

WeCalc-B Basic
Small-scale AI Inference · Data Analytics · University Teaching & Training · Dev/Test Environments

Core Value

  • A single unit can launch full computing services — ideal for pilot validation and lightweight deployment.
  • All data stays on your own devices, meeting data-sovereignty requirements for education, government, and R&D.

Delivery Model

  • Ships with integrated hardware and software — ready to deploy on arrival.
  • Supports one-click startup and remote O&M, reducing on-site implementation complexity.
Production Workhorse

WeCalc-P Professional

Mid-scale cluster for training and inference

WeCalc-P leverages multi-CPU, multi-GPU node clusters with EBOF all-flash storage architecture to deliver higher throughput, lower latency, and stronger scalability for training, inference, and edge analytics workloads.

12 PFLOPS
Max Computing Power
16×3.84TB
EBOF All-Flash Storage
100G RDMA
Low-latency Interconnect
48–72h
Typical Deployment
Compute: Multi-CPU + Multi-GPU Node Cluster
Storage: 16×3.84TB NVMe SSD, EBOF All-Flash Storage
Network: 100G RDMA Smart NIC, RoCEv2 Interconnect
Computing Power: Up to 12 PFLOPS
Use Cases: Mid-scale AI Training & Inference, Industrial Edge Computing
Deployment: 48–72 hours fast delivery, supports future expansion

Ideal Customer Types

  • Customers ready for production
  • Teams with defined use cases
  • Mid-scale training & inference projects

Recommended Industries

Manufacturing · Healthcare · Finance · Smart City

Most popular production-grade solution

WeCalc-P Professional
Mid-scale AI Training & Inference · Industrial Edge Computing · Medical Imaging Analysis · Smart City

Core Value

  • Combines disaggregated storage-compute with all-flash storage to significantly boost data loading efficiency and throughput.
  • Bridges the gap from pilot to production with balanced performance, cost, and scalability.

Delivery Model

  • Pre-configured per scenario to reduce on-site integration time.
  • Provides compute, storage, and networking as an integrated turnkey delivery with O&M support.
Flagship Custom Solution

WeCalc-E Enterprise

Thousand-GPU heterogeneous cluster for HPC and large-scale training

WeCalc-E targets high-density, high-throughput, and high-reliability workloads. Supporting multi-node heterogeneous clusters and PB-level distributed storage pools, it is the flagship solution for enterprises and research institutions building on-premise computing infrastructure.

50+ PFLOPS
Flagship Computing Power
PB-level
Distributed Storage Pool
200G/400G
High-speed Interconnect
1000+ GPUs
Heterogeneous Cluster
Compute: Multi-Node Heterogeneous Cluster, supports 1000+ GPUs
Storage: PB-level Distributed Storage Pool
Network: 200G/400G High-speed Interconnect
Computing Power: 50 PFLOPS and beyond
Use Cases: Large-scale Model Training, High-Performance Computing
Deployment: Custom delivery based on scale and industry requirements

Ideal Customer Types

  • Regional intelligent computing centers
  • Large-scale training platforms
  • Research computing platforms

Recommended Industries

Research · Autonomous Driving · Finance · Government

Custom delivery based on business scale

WeCalc-E Enterprise
Large-scale Model Training · High-Performance Computing (HPC) · Autonomous Driving Simulation · Research Computing Platforms

Core Value

  • Handles ultra-large-scale training and HPC tasks, meeting demands for high concurrency, massive data, and complex models.
  • High-speed interconnect and distributed storage ensure data throughput and cluster stability for large workloads.

Delivery Model

  • Complete solution design for compute, networking, and storage based on business objectives.
  • Full-lifecycle delivery from site assessment to cluster go-live.

FEATURES

Core Features

From plug-and-play to modular expansion — WeCalc reduces computing-center delivery complexity to a level that fits real enterprise deployments.

Plug and Play

Ships with fully integrated hardware and software — no specialized setup required.

Single-Unit Operation

One device delivers a complete computing service, ideal for quick pilots.

One-Click Startup

Power on and launch computing services with one click, lowering deployment barriers.

Turnkey Delivery

End-to-end turnkey service from deployment to operation, completed in 48–72 hours.

Modular Expansion

Supports hot-swap expansion, scaling smoothly from a single unit to a cluster.

Multi-Hardware Support

Flexibly accommodates CPUs, GPUs, SSDs, and more.

Domestic HW Compatible

Huawei Ascend & Kunpeng certified, compatible with 90%+ of domestic GPUs.

COMPARISON

Product Specification Comparison

Specification | WeCalc-B Basic | WeCalc-P Professional (Recommended) | WeCalc-E Enterprise
Compute | 1× CPU + Optional GPU | Multi-CPU + Multi-GPU Cluster | Thousand-GPU Heterogeneous Cluster
Storage | 4TB NVMe SSD | 16× 3.84TB EBOF | PB-level Distributed Storage
Computing Power | ≤1 PFLOPS | ≤12 PFLOPS | ≥50 PFLOPS
Network | 25G/100G Ethernet | 100G RDMA | 200G/400G Interconnect
Deployment | 48–72 hours | 48–72 hours | Custom schedule
Scalability | Single unit, linear scaling | Up to 100 nodes | 10,000+ node evolution
Domestic HW Compatible | ✓ | ✓ | ✓
On-Premise Data | ✓ | ✓ | ✓
Reference Price | ¥98K / ¥2,000/mo | ¥2–5 million | ¥5 million+

Financing Lease

Starting from just ¥2,000/month for 1P of computing power

Equivalent to approximately ¥40,000 in ChatGPT token credits — ideal for enterprises and institutions looking to adopt local AI capabilities with minimal upfront investment.
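As a rough illustration of the two figures quoted above — the ¥2,000/month lease rate and the ¥40,000 token-credit equivalent — the sketch below shows the back-of-envelope arithmetic. It uses only the source numbers; the comparison basis is illustrative, not a pricing quote.

```python
# Back-of-envelope comparison using only the figures quoted above:
# the WeCalc-B financing lease rate and the stated token-credit equivalent.

LEASE_PER_MONTH = 2_000   # ¥ per month, WeCalc-B financing lease
TOKEN_BUDGET = 40_000     # ¥, quoted ChatGPT token-credit equivalent

# Number of lease months the same budget would cover.
months_covered = TOKEN_BUDGET / LEASE_PER_MONTH

print(f"¥{TOKEN_BUDGET:,} in token credits covers "
      f"{months_covered:.0f} months of the ¥{LEASE_PER_MONTH:,}/mo lease")
# → ¥40,000 in token credits covers 20 months of the ¥2,000/mo lease
```

In other words, the quoted token-credit budget equals roughly 20 months of the entry-level lease, which is the basis for the "minimal upfront investment" claim.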

Find the Right WeCalc Product for You

Our expert team will recommend the best solution based on your actual needs

Free Consultation