cTuning & MLCommons Collective Knowledge Challenges


Run and optimize MLPerf inference v3.1 benchmarks on Windows

Open date: 2023 Jul 4

Closing date: 2023 Aug 17

Collective Knowledge Contributor award: Yes


Challenge

Prepare, optimize, and submit benchmarking results to MLPerf inference v3.1 using the CM automation language on Windows.

Read this documentation to learn how to run reference implementations of the MLPerf inference benchmarks using the CM automation language, and use them as a basis for your own development.
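As a starting point, the typical CM workflow looks roughly like the sketch below. The `pip` package name (`cmind`), the `mlcommons@ck` repository, and the `cm run script` command come from the public MLCommons CM documentation; the specific tags, model, and backend shown here are illustrative choices, not the only supported ones, and exact flags may differ between CM versions.

```shell
# Install the CM (Collective Mind) automation language from PyPI.
pip install cmind

# Pull the MLCommons repository containing the CM automation scripts.
cm pull repo mlcommons@ck

# Example invocation of an MLPerf inference reference implementation
# (model, backend, device, and scenario below are illustrative).
cm run script --tags=run-mlperf,inference,_find-performance \
    --model=resnet50 --implementation=reference \
    --backend=onnxruntime --device=cpu --scenario=Offline \
    --quiet
```

On Windows, these commands are normally run from a Python-enabled shell (for example, a standard Command Prompt or PowerShell with Python on the PATH); consult the linked documentation for platform-specific prerequisites.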

Watch this ACM REP'23 keynote to learn more about our open-source project and long-term vision.

Prizes

Organizers

Status

Open ticket: GitHub

Results

All accepted results will be publicly available in the CM format, with derived metrics, in this MLCommons repository, in the MLCommons Collective Knowledge explorer, and on the official MLCommons website.
