Since 2014, we have organized open benchmarking, optimization and reproducibility challenges
in collaboration with ACM, MLCommons, IEEE, NeurIPS, HiPEAC and the community.
Our goal is to connect industry, academia, students and the community
to learn how to build and run AI, ML and other emerging workloads
more efficiently and cost-effectively (in terms of cost, latency, throughput,
accuracy, energy usage, size, etc.) across diverse and rapidly evolving
models, datasets, software and hardware, using a common automation framework
with technology-agnostic automation recipes for MLOps and MLPerf.
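As a minimal sketch of what invoking such an automation recipe can look like, the example below uses the Python API of the MLCommons CM (Collective Mind) framework; the specific recipe tags (`detect,os`) and flags shown are illustrative assumptions rather than a definitive reference:

```python
# A minimal sketch, assuming the `cmind` package is installed
# (pip install cmind) and a portable "script" automation recipe
# tagged `detect,os` is available in a pulled CM repository.
import cmind

# Ask the CM framework to run a technology-agnostic automation recipe;
# CM locates the matching script and runs it on the current platform.
r = cmind.access({'action': 'run',
                  'automation': 'script',
                  'tags': 'detect,os',   # hypothetical example recipe tags
                  'quiet': True})

# By CM convention, calls return a dictionary with a 'return' code
# (0 on success) and an 'error' message on failure.
if r['return'] > 0:
    print('CM error:', r['error'])
```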
Learn more about our initiatives and long-term goals from our arXiv white paper,
ACM REP keynote,
ACM TechTalk
and our Artifact Evaluation website.