cTuning & MLCommons Collective Knowledge Challenges


Work with the community to find the most efficient CPUs (Intel/AMD/Arm) for BERT and MobileNets/EfficientNets (latency, throughput, accuracy, number of cores, frequency, memory size, cost and other metrics)

Open date: 2023 Jul 25

Closing date: 2023 Aug 17

Collective Knowledge Contributor award: Yes


Introduction

The goal of this MLPerf@home challenge is to help the community find the most efficient CPU (Intel/AMD/Arm) for the BERT-99 model with the DeepSparse engine and for different variations of MobileNets/EfficientNets with TFLite, in terms of latency, throughput, accuracy, number of cores, frequency, memory size, cost, and other metrics.

We would like to ask you to run a few MLPerf inference benchmarks with BERT and MobileNets/EfficientNets on one or more systems with different CPUs that you have access to: laptops, servers, cloud instances...

You will be able to run the benchmarks, collect all metrics, and submit results in an automated way in a native environment or a Docker container using the portable and technology-agnostic MLCommons Collective Mind automation language (CM).
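
For reference, setting up CM typically takes two commands: installing the cmind package from PyPI and pulling the MLCommons repository with the automation scripts. This is a minimal sketch; see the CM documentation for platform-specific details:

```bash
# Install the MLCommons CM automation language (PyPI package: cmind)
python3 -m pip install cmind

# Pull the MLCommons repository with CM scripts that automate MLPerf benchmarks
cm pull repo mlcommons@ck
```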

Your name and benchmark submissions will be published in the official MLCommons inference v3.1 results on September 1, 2023 (submission deadline: August 4, 2023), listed on the official leaderboard, included in the prize draw, and presented at our upcoming ACM/HiPEAC events.

Please report any problems you encounter via GitHub issues to help the community improve the CM automation workflows for running MLPerf benchmarks on any system with any software/hardware stack.

Thank you in advance for helping the community find Pareto-efficient AI/ML systems!

Minimal requirements

Instructions to run benchmarks and submit results

You can run any or all of these benchmarks, depending on your available time (see the illustrative command below):
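
As an illustration only, a CM command to benchmark the BERT-99 model with the DeepSparse backend on a CPU might look like the sketch below; the exact script tags and flags are assumptions based on the MLPerf inference automation of that period and may differ, so follow the official CM instructions for the current syntax:

```bash
# Hypothetical sketch: generate and run MLPerf inference commands for BERT-99
# with the DeepSparse engine on CPU (Offline scenario).
# Tags and flags are assumptions -- check the CM documentation.
cm run script --tags=generate-run-cmds,inference \
    --model=bert-99 \
    --backend=deepsparse \
    --device=cpu \
    --scenario=Offline \
    --quiet
```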

Results

All accepted results with submitter names will be publicly available on the official MLCommons website and in the Collective Knowledge explorer (MLCommons CK), along with the reproducibility and automation report, to help the community build efficient AI/ML systems.

Organizers

Advanced challenges

If you found running these benchmarks relatively easy, please try the more advanced challenges, read about our plans and long-term vision, check the CM documentation, and run other MLPerf benchmarks.
