cTuning & MLCommons Collective Knowledge Challenges


Develop a reference implementation of any MLPerf inference benchmark to run on Amazon Inferentia and submit to MLPerf inference v3.1+

Open date: 2023 Jul 4

Closing date: 2023 Aug 17

Collective Knowledge Contributor award: Yes


Challenge

Develop a reference implementation of any MLPerf inference benchmark to run on Amazon Inferentia. Submit preliminary (unoptimized) benchmarking results to MLPerf inference v3.1 and beyond.

Read this documentation to learn how to run reference implementations of MLPerf inference benchmarks using the CM automation language, and use them as a basis for your development.
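
As a rough starting point, the typical CM workflow looked approximately like the sketch below at the time of this challenge (based on the public CM documentation for MLPerf inference v3.1; exact script tags and flags may differ across CM versions, and the Inferentia-specific backend is precisely what this challenge asks you to develop):

```bash
# Install the MLCommons CM automation language (Python package "cmind")
python3 -m pip install cmind

# Pull the MLCommons repository with CM automation scripts
cm pull repo mlcommons@ck

# Run a reference MLPerf inference benchmark as a baseline before porting
# to Inferentia (example: ResNet-50 on CPU with the ONNX Runtime backend;
# model, device, and backend values here are illustrative)
cm run script --tags=run,mlperf,inference,generate-run-cmds \
    --model=resnet50 --implementation=reference \
    --device=cpu --backend=onnxruntime --quiet
```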

Check this ACM REP'23 keynote to learn more about our open-source project and long-term vision.

Prizes

Organizers

Results

All accepted results will be publicly available in the CM format, with derived metrics, in this MLCommons repository, in the MLCommons Collective Knowledge explorer, and on the official MLCommons website.
