
Local AI development, built for control and scale

Enable AI teams to prototype, fine-tune and run inference locally with predictable performance and a clear path to scale.
Why upgrade

AI development is facing practical constraints

As organisations expand AI capability, development teams are encountering real-world limits. Cloud GPU pricing fluctuates. Shared environments introduce queue time. Governance requirements limit how sensitive data can be handled.
These constraints slow iteration and create uncertainty around cost, compliance and delivery timelines.
AI programmes rarely stall due to lack of ambition. They stall when development environments become unpredictable.

The challenge

Iteration speed is reduced by shared cluster contention and remote dependencies.

The risk

Volatile consumption pricing and governance concerns can delay projects and increase operational exposure.

The requirement

Organisations need controlled, always-available AI development capacity aligned with production environments.

The solution

Lenovo ThinkStation PGX provides compact, office-ready AI compute powered by NVIDIA Grace Blackwell architecture, designed specifically for local AI workflows.

Why ThinkStation PGX

Explore how ThinkStation PGX combines AI performance, unified memory and deployment flexibility to support local AI development.

Optimised for modern AI workflows

Designed specifically for AI development, PGX combines CPU, GPU, high-bandwidth unified memory and networking in a single compact system, supporting experimentation, fine-tuning and inference without reliance on shared resources.
Software foundation

Linux-native AI stack

NVIDIA DGX OS, based on Ubuntu Linux, together with the NVIDIA AI software stack, including CUDA 13, provides a consistent development environment aligned with larger-scale infrastructure.
Performance

AI performance at the desk

Up to 1 PFLOP of AI performance, powered by the NVIDIA GB10 Grace Blackwell Superchip, enabling high-performance model development in a controlled local environment.

Compact and energy efficient

A 1.13L small form factor with a maximum 240W power profile, designed for office and edge environments without specialist facilities.

Unified memory architecture

128GB of LPDDR5X coherent unified system memory with 273GB/s of bandwidth, designed to support model experimentation, inference and data-intensive workflows.
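As a rough illustration of what 128GB of unified memory and 273GB/s of bandwidth imply for local inference, the sketch below estimates whether a model's weights fit and what a bandwidth-bound decoding ceiling looks like. The model sizes, quantisation widths and headroom figure are illustrative assumptions, not Lenovo or NVIDIA figures.

```python
# Back-of-envelope sizing for a unified-memory AI workstation.
# Assumptions (illustrative, not vendor figures): weights dominate memory
# use, and single-stream decoding is memory-bandwidth bound, so
# tokens/sec is at most bandwidth / bytes read per token (~= weight size).

MEMORY_GB = 128          # coherent unified system memory
BANDWIDTH_GBPS = 273     # memory bandwidth, GB/s

def model_bytes_gb(params_billion: float, bits_per_param: int) -> float:
    """Approximate weight footprint in GB for a given parameter count."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

def fits(params_billion: float, bits_per_param: int,
         headroom_gb: float = 24) -> bool:
    """Whether weights fit, leaving headroom for KV cache, activations, OS."""
    return model_bytes_gb(params_billion, bits_per_param) + headroom_gb <= MEMORY_GB

def decode_tokens_per_sec(params_billion: float, bits_per_param: int) -> float:
    """Bandwidth-bound upper estimate for single-stream decoding."""
    return BANDWIDTH_GBPS / model_bytes_gb(params_billion, bits_per_param)

# A 70B-parameter model at 8-bit weights needs ~70 GB and fits;
# at 16-bit (~140 GB) it does not.
print(fits(70, 8), fits(70, 16))
print(round(decode_tokens_per_sec(70, 8), 1))  # ~3.9 tokens/sec ceiling
```

The point of the sketch is that on a unified-memory system, memory capacity decides which models can be loaded at all, while memory bandwidth bounds single-stream decoding speed regardless of peak FLOPS.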
Deployment flexibility

Flexible deployment modes

Operate PGX in standalone, companion or cluster mode, enabling teams to work locally, offload GPU-intensive tasks or link units together for increased model capacity.

Built for practical AI development

ThinkStation PGX supports common AI development and experimentation workflows in controlled local environments.
Development

Prototyping and fine tuning

Experiment with model architectures, refine parameters and validate outputs locally before scaling to larger infrastructure.
Data

Data science and model validation

Use GPU acceleration to support data preparation, analytics and testing workflows that benefit from high performance and unified memory.
Applications

Enterprise AI applications

Develop and test AI agents and internal tools that rely on dynamic context switching, proprietary datasets and governed environments.
Deployment

Edge and inference workloads

Deploy AI capability in office or edge locations where latency, data sensitivity or operational control are priorities, and run inference locally with responsive performance.

Start local. Scale with confidence.

ThinkStation PGX provides an entry point into a broader Lenovo AI infrastructure continuum.

Organisations can:

  • Begin with desk-side AI development on PGX
  • Expand to ThinkStation P Series systems with additional GPU capacity
  • Scale further to Lenovo ThinkSystem servers supporting multiple GPUs per server
The consistent Linux-native NVIDIA AI stack supports a smoother transition from development to production environments.
Why choose us

This is where VitrX supports your AI roadmap

VitrX works with organisations to design AI capability that is practical, governed and aligned with budget cycles.
We support:
  • Workload assessment and architecture planning
  • Alignment with lifecycle and refresh strategies
  • Deployment and standardisation
  • A clear path from experimentation to scaled production
The objective is predictable AI capability, not reactive hardware decisions.

Ready to move forward?

If you are planning AI capability over the next 12 to 18 months, ThinkStation PGX provides a controlled starting point with a defined path to scale.

Questions

Find answers to common questions about ThinkStation PGX and planning AI capability with VitrX.
What is Lenovo ThinkStation PGX designed for?
Lenovo ThinkStation PGX is designed to support local AI development workflows, including prototyping, fine-tuning and inference. It provides high-performance AI compute at the desk, enabling development teams to experiment and validate models in a controlled environment before scaling to larger infrastructure.
Does ThinkStation PGX replace cloud or data centre AI infrastructure?
No. PGX is not positioned as a replacement for cloud or data centre AI infrastructure. It is designed to support the early and iterative stages of AI development locally. Organisations can then scale workloads to larger on-premises or cloud environments using a consistent software foundation when higher throughput is required.
How does PGX support data governance and compliance?
PGX enables AI development to take place within a controlled local environment. This can help organisations manage sensitive datasets, proprietary code and regulated information without immediately moving them into shared or external environments. It supports governance discussions by keeping early-stage experimentation within defined boundaries.
Can workloads developed on PGX scale to larger infrastructure?
Yes. PGX is built on the NVIDIA Grace Blackwell architecture and uses a Linux-based NVIDIA AI software stack. It is designed to align with broader Lenovo ThinkStation and ThinkSystem infrastructure, supporting a smoother transition from local development to scaled production environments.
How does VitrX support AI capability planning?
VitrX works with organisations to assess AI use cases, map workloads to appropriate execution environments and align infrastructure decisions with lifecycle planning. We support selection, deployment and standardisation to ensure AI capability is introduced in a structured and predictable way.

Need more help?

Speak with the VitrX team about your requirements.