cTuning & MLCommons Collective Knowledge Challenges

Run and optimize MLPerf inference v3.1 benchmarks with Neural Magic's DeepSparse library

Open date: 2023 Jul 4

Closing date: 2023 Aug 17

Collective Knowledge Contributor award: Yes


Challenge

Prepare, optimize, and submit benchmarking results to MLPerf inference v3.1 using the CM automation language with the DeepSparse library, on any model and any platform.
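
For orientation, DeepSparse runs sparsified ONNX models on commodity CPUs. Below is a minimal sketch of a DeepSparse-backed inference call, assuming `pip install deepsparse`; the SparseZoo stub is a placeholder and should be replaced with a real stub from sparsezoo.neuralmagic.com that matches the MLPerf model you target:

```python
# Minimal sketch of CPU inference with DeepSparse (pip install deepsparse).
# The SparseZoo stub below is a placeholder; browse sparsezoo.neuralmagic.com
# for stubs matching an MLPerf model such as BERT-99.
from deepsparse import Pipeline

qa_pipeline = Pipeline.create(
    task="question-answering",
    model_path="zoo:nlp/question_answering/bert-base/pytorch/huggingface/"
               "squad/pruned95_obs_quant-none",  # placeholder stub
)

result = qa_pipeline(
    question="What does DeepSparse accelerate?",
    context="DeepSparse accelerates sparsified transformer inference on CPUs.",
)
print(result.answer)
```

In an MLPerf submission, CM wires a pipeline like this into the LoadGen harness for you; the sketch only shows what the DeepSparse backend itself does.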

Check this related challenge for more details.

Read this documentation to run the reference implementations of the MLPerf inference benchmarks using the CM automation language, and use them as a basis for your own development.
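
CM scripts can also be driven from Python via the `cmind` package. A hedged sketch, assuming `pip install cmind` followed by `cm pull repo mlcommons@ck`; the script tags here follow the MLPerf inference documentation of this period and are an assumption to be verified against the linked docs:

```python
# Sketch of invoking a CM (Collective Mind) script from Python.
# Prerequisites: pip install cmind; cm pull repo mlcommons@ck
import cmind

r = cmind.access({
    "action": "run",
    "automation": "script",
    # Assumed tags for the MLPerf inference run script; check the docs.
    "tags": "run,mlperf,inference,generate-run-cmds,_find-performance",
    "out": "con",  # stream the script's output to the console
})
if r["return"] > 0:
    cmind.error(r)  # print the CM error message and exit
```

The same script can be invoked from the shell with `cm run script --tags=...`; the Python entry point is convenient when embedding CM runs in your own automation.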

Check this ACM REP'23 keynote to learn more about our open-source project and long-term vision.

Prizes

Organizers

Results

All accepted results will be publicly available in the CM format with derived metrics in this MLCommons repository, in the MLCommons Collective Knowledge explorer, and on the official MLCommons website.

