Geekbench, the well-known benchmarking suite, has introduced a new cross-platform tool designed to evaluate device performance under AI-intensive workloads. Named Geekbench AI, the tool assesses how well a device's CPU, GPU, and NPU (Neural Processing Unit) handle machine learning workloads.

Developed by Primate Labs, the creators of Geekbench, the software was previously known as Geekbench ML, which shipped a preview release in 2021. The rename to Geekbench AI appears to align the product with the current surge in AI technology. To probe how different hardware handles a range of AI-related tasks, Geekbench AI measures performance in terms of both speed and accuracy, and it supports multiple frameworks, including ONNX, Core ML, TensorFlow Lite, and OpenVINO.
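To make the speed side of this concrete, here is a minimal sketch of the kind of measurement such a benchmark performs: timing repeated inference runs of a model through one of the supported frameworks, in this case ONNX Runtime. The model file and input shape are hypothetical placeholders, and this is only an illustration of the general approach, not Geekbench AI's actual harness.

```python
import time

import numpy as np
import onnxruntime as ort

# Hypothetical model file; any image classifier exported to ONNX would do.
session = ort.InferenceSession("mobilenet_v2.onnx")
input_name = session.get_inputs()[0].name

# Hypothetical input: one 224x224 RGB image in NCHW layout.
x = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Warm up so one-time initialization costs don't skew the timing.
for _ in range(5):
    session.run(None, {input_name: x})

# Time a batch of runs and report the average latency per inference.
runs = 50
start = time.perf_counter()
for _ in range(runs):
    session.run(None, {input_name: x})
elapsed = time.perf_counter() - start

print(f"avg latency: {elapsed / runs * 1000:.2f} ms per inference")
```

A real benchmark would repeat this across CPU, GPU, and NPU execution providers and across many workloads before rolling the results into a single score.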


Geekbench AI's test results include three scores: full precision, half precision, and quantized. Primate Labs states that these scores also incorporate accuracy measurements, gauging how closely each workload's outputs match ground-truth results, or as the company puts it, "the accuracy of the model in performing its intended tasks."
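The accuracy dimension matters because lower-precision runs trade fidelity for speed. The sketch below illustrates the idea with a simplified stand-in: it fake-quantizes a full-precision output and checks how well the result agrees with the reference. The quantization step and similarity metrics here are assumptions chosen for illustration, not Geekbench AI's actual methodology.

```python
import numpy as np

def fake_quantize(x: np.ndarray, bits: int = 8) -> np.ndarray:
    """Round values onto a uniform grid, mimicking integer quantization."""
    levels = 2 ** bits - 1
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / levels
    return np.round((x - lo) / scale) * scale + lo

rng = np.random.default_rng(0)
reference = rng.normal(size=(1, 1000)).astype(np.float32)  # fp32 "logits"
quantized = fake_quantize(reference)

# Two simple fidelity checks: does the top prediction agree, and how
# close are the raw outputs overall?
top1_match = reference.argmax() == quantized.argmax()
cos_sim = np.dot(reference.ravel(), quantized.ravel()) / (
    np.linalg.norm(reference) * np.linalg.norm(quantized)
)
print(f"top-1 agreement: {top1_match}, cosine similarity: {cos_sim:.4f}")
```

A device that posts a high quantized score but a poor accuracy measurement is fast at producing worse answers, which is exactly the trade-off the separate accuracy figure is meant to expose.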

Geekbench AI is currently available to download on Windows, macOS, Linux, Android, and iOS. However, understanding how well these scores correlate with real-world task performance will take time, as more devices with native AI hardware, such as Copilot+ PCs and the latest smartphones, are put through the benchmark.

Unlike traditional frame-rate or loading-time tests, the arrival of Geekbench AI suggests we may need to start tracking new metrics, such as the accuracy of predictive text or the performance of generative AI image editors. It reflects how profoundly AI is changing the way device performance is evaluated.

The launch of Geekbench AI sets a new baseline for evaluating device AI capabilities. As more devices integrate AI features, the value of such testing tools will only grow: they help consumers understand and compare the AI performance of different devices, and they give hardware manufacturers reference metrics for optimizing AI performance.

That said, it is important to recognize that AI performance testing is still in its early stages. How well Geekbench AI's scores correlate with actual user experience, and how accurately they reflect device performance across different AI application scenarios, requires further observation and validation.

In the future, we may see more AI performance testing tools emerge, evaluating device AI capabilities from different perspectives. This trend reflects how AI technology is becoming an important dimension in judging device performance, on par with traditional CPU and GPU performance.