Open date: 2023 Jul 4
Closing date: 2023 Aug 17
Collective Knowledge Contributor award: Yes
Develop a reference implementation of any MLPerf inference benchmark to run on the latest publicly available Google TPU. Submit preliminary (unoptimized) benchmarking results to MLPerf inference v3.1 and beyond.
Note that you can use either a GCP Cloud TPU or the Coral Edge TPU USB Accelerator. In the latter case, you can reuse and extend our CM-MLPerf script for MobileNets!
Read this documentation to run the reference implementations of the MLPerf inference benchmarks using the CM automation language, and use them as a basis for your development.
Check this ACM REP'23 keynote to learn more about our open-source project and long-term vision.
All accepted results will be publicly available in the CM format with derived metrics in this MLCommons repository, in the MLCommons Collective Knowledge explorer, and on the official MLCommons website.