Desktop-Level Supercomputing. Pocket-Sized Power.
Up to 370 TOPS of AI computing power in a compact 152mm × 152mm form factor. Run local LLMs, accelerate edge inference, and deploy AI workloads — all from your desk.
The NexTune CryGen200 combines a powerful CPU, GPU, and programmable NPU into a single compact chassis, delivering enterprise-grade AI performance at the edge.
8-core processor running at 2.65 GHz with a high-bandwidth system design exceeding 100 GB/s memory bandwidth. Supports up to 64GB LPDDR5/5X.
Supports high-performance 3D rendering, large language model (LLM) inference, visual large models, and multimodal AI inference workloads.
A dedicated neural network processor accelerates multimodal data processing — including speech and image — for models like YOLO and ResNet.
Integrated CPU+GPU+NPU delivers 50 TOPS. Expand with optional M.2 computing power cards for up to 320 additional TOPS — totaling 370 TOPS (INT8).
Compatible with the CUDA ecosystem, with a full migration toolchain. Supports mainstream open-source LLMs and popular AI training and inference frameworks.
Equipped with 4× USB 3.0, USB Type-C, HDMI 1.4, DisplayPort 1.4, 2× Gigabit Ethernet, Wi-Fi 6E + Bluetooth, 3.5mm audio, and expandable M.2 slots.
The CryGen200 is built around a precision-machined metallic grey aluminum enclosure, measuring just 152mm × 152mm × 45.8mm. Its active cooling system ensures sustained peak performance even under demanding AI workloads.
The device supports both upright and horizontal placement, making it adaptable to any workspace. The included octagonal stand base provides stability and improved airflow.
Complete technical details for the NexTune CryGen200 AI Mini PC.
| Specification | Value |
|---|---|
| Brand | NexTune |
| Model | CryGen200 |
| Form Factor | AI Computing Box |
| Color | Metallic Grey / Quicksilver |
| Dimensions | 152mm × 152mm × 45.8mm |
| Base Dimensions | 116.4mm × 94.8mm × 8.5mm |
| Cooling | Active Cooling |
| Operating System | Ubuntu |
| Operating Temperature | -10°C to +45°C |
| Storage Temperature | -20°C to +55°C |
| CPU | 8-core, 2.65 GHz |
| Memory Bandwidth | >100 GB/s |
| RAM | 32GB LPDDR5 (16GB / 64GB optional) |
| Storage | 1TB M.2 SSD 2280 |
| NPU | Programmable Neural Network Processor |
| Integrated Computing Power | 50 TOPS (CPU+GPU+NPU) |
| Max Computing Power | Up to 370 TOPS (INT8) with expansion card |
| Ethernet | 2× RJ45 (10/100/1000 Mbps) |
| Wi-Fi | Wi-Fi 6E (on-board, 3 antennas) |
| Bluetooth | Bluetooth (USB) |
| USB Type-A | 4× USB 3.0 Type-A |
| USB Type-C | 1× USB 3.0 Type-C |
| Video Output | 1× HDMI 1.4, 1× DisplayPort 1.4 |
| Audio | 3.5mm 4-pole (TRRS) jack |
| Power Input | DC5525, 19V, 90W+ |
| Expansion | 2× M.2 Key M 2280 slots (supports computing power cards) |
| Indicators | 2× dual-color status LEDs |
| Power Adapter | 19V 90W, DC cable 1.5m, 18AWG, DC5525 |
| AC Power Cord | 3-pin, 3×0.75mm², 1.2m |
| HDMI Cable | High-definition, 1.2m |
| Documentation | Product specification manual (1 copy) |
From private LLM deployment to industrial edge inference, the CryGen200 adapts to your needs.
Run large language models entirely on-premises. No cloud dependency, no data privacy concerns. The CryGen200's 370 TOPS and CUDA-compatible ecosystem make local LLM inference fast and practical.
Deploy AI inference at the network edge, reducing latency and bandwidth costs. Dual Gigabit Ethernet and Wi-Fi 6E ensure reliable connectivity in demanding industrial and commercial environments.
Power intelligent office applications — from real-time speech recognition and document analysis to computer vision and automated workflows — all from a device that fits on any desk.
An ideal platform for AI research and education. Students and researchers can train models, run inference experiments, and explore AI frameworks on a cost-effective, self-contained device.
A comprehensive set of ports and wireless options to connect to any environment.
The NexTune CryGen200 is a compact, high-performance AI computing terminal designed for local large language model (LLM) deployment, edge computing, smart office, and AI education. It delivers up to 370 TOPS of integrated computing power in a 152mm × 152mm × 45.8mm form factor.
The CryGen200 delivers up to 370 TOPS (INT8) of total computing power: 50 TOPS from the integrated CPU+GPU+NPU, plus up to 320 TOPS from optional M.2 computing power card modules.
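The headline figure is simply the sum of the integrated and expansion compute. A minimal sketch of that arithmetic, using only the figures stated in the spec sheet (the per-card TOPS of an individual M.2 module is not published here, so only the totals are used):

```python
# Figures taken from the CryGen200 spec sheet (INT8 TOPS).
INTEGRATED_TOPS = 50       # CPU + GPU + NPU on the base unit
MAX_EXPANSION_TOPS = 320   # optional M.2 computing power card modules, fully populated

# Maximum total compute with all expansion modules installed.
max_total_tops = INTEGRATED_TOPS + MAX_EXPANSION_TOPS
print(max_total_tops)  # 370
```

Without any expansion cards installed, the device operates at the 50 TOPS integrated figure.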
Yes. The CryGen200 is compatible with the CUDA ecosystem and provides a full migration toolchain. It supports mainstream open-source LLMs and popular AI training and inference frameworks, enabling rapid deployment without rewriting existing code.
The standard configuration ships with 32GB LPDDR5 RAM and 1TB M.2 SSD. Optional configurations with 16GB or 64GB RAM are available. The dual M.2 Key M 2280 slots also allow expansion with additional computing power card modules.
The CryGen200 ships with Ubuntu Linux, providing a developer-friendly environment for AI workloads, LLM deployment, and edge computing applications.
Each CryGen200 ships with: the device unit, a 19V 90W power adapter (DC5525, 1.5m cable), an AC power cord (1.2m), an HDMI cable (1.2m), and a product specification manual.
Contact our team for pricing, bulk order inquiries, technical consultation, or partnership opportunities. We respond within 24 hours.
GREAT VISION TECHNOLOGY PTE. LTD.
Registered in Singapore