Open date: 2023 Jul 4
Closing date: 2024 Jan 4
Collective Knowledge Contributor award: Yes
Add a CM interface to run MLPerf inference benchmarks on Intel-based platforms.
You can start by reproducing any past MLPerf inference submission from Intel and their partners and then add CM automation on top of it.
Read this documentation to learn how to run reference implementations of MLPerf inference benchmarks using the CM automation language and use them as a base for your development (see the sketch below).
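
As a rough starting point, the commands below sketch how a reference MLPerf inference workload is typically launched via CM. The exact script tags, model, backend, and flag names are assumptions based on the public documentation and may differ for your target Intel platform; check the docs linked above for the authoritative options.

```bash
# Install the CM automation language and pull the MLCommons automation repository
pip install cmind
cm pull repo mlcommons@ck

# Illustrative example: run the reference ResNet-50 benchmark on CPU
# (tags, model, backend and scenario below are assumptions; consult the docs for the full list)
cm run script --tags=run,mlperf,inference,generate-run-cmds,_performance-only \
     --model=resnet50 \
     --implementation=reference \
     --backend=onnxruntime \
     --device=cpu \
     --scenario=Offline \
     --quiet
```

Once the reference run works on your Intel platform, the same CM scripts can be extended or wrapped to automate the vendor-specific implementation you are reproducing.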
Check this ACM REP'23 keynote to learn more about our open-source project and long-term vision.
All accepted results will be publicly available in the CM format with derived metrics in this MLCommons repository, in the MLCommons Collective Knowledge explorer, and at the official MLCommons website.