OpenBEATs: A Fully Open-Source General-Purpose Audio Encoder

1 Carnegie Mellon University, USA
2 National Institute of Advanced Industrial Science and Technology (AIST), Japan
WASPAA 2025

OpenBEATs is a completely open-source model that scales up BEATs to handle multiple sound domains, including music, environmental sound, and bioacoustics.

Abstract

Masked token prediction has emerged as a powerful pre-training objective across language, vision, and speech, offering the potential to unify these diverse modalities through a single pre-training task. However, its application to general audio understanding remains underexplored, with BEATs being the only notable example. BEATs has seen limited modification due to the absence of open-source pre-training code. Furthermore, it was trained only on AudioSet, restricting its broader downstream applicability. To address these gaps, we present OpenBEATs, an open-source framework that extends BEATs via multi-domain audio pre-training. We conduct comprehensive evaluations across six types of tasks, twenty-five datasets, and three audio domains, including audio reasoning tasks such as audio question answering, entailment, and captioning. OpenBEATs achieves state-of-the-art performance on six bioacoustics datasets, two environmental sound datasets, and five reasoning datasets, outperforming models exceeding a billion parameters at one-fourth their parameter size. These results demonstrate the effectiveness of multi-domain datasets and the masked token prediction task for learning general-purpose audio representations. To promote further research and reproducibility, we release all pre-training and evaluation code, pre-trained and fine-tuned checkpoints, and training logs.
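
To make the objective concrete, below is a minimal PyTorch sketch of BEATs-style masked token prediction. It is an illustration under simplifying assumptions, not the exact OpenBEATs recipe: the encoder and tokenizer callables are placeholders, and masked patches are zeroed out rather than replaced with a learned mask embedding as in BEATs.

    import torch
    import torch.nn.functional as F

    def masked_token_prediction_loss(encoder, tokenizer, patches, mask_ratio=0.75):
        # patches: (batch, num_patches, patch_dim) spectrogram patches
        batch, num_patches, _ = patches.shape

        # A frozen acoustic tokenizer provides a discrete target id per patch.
        with torch.no_grad():
            targets = tokenizer(patches)  # (batch, num_patches) token ids

        # Randomly mask a subset of patches (zeroing is a simplification;
        # BEATs substitutes a learned mask embedding).
        mask = torch.rand(batch, num_patches, device=patches.device) < mask_ratio
        masked = patches.clone()
        masked[mask] = 0.0

        # The encoder predicts a token distribution for every patch; the loss
        # is cross-entropy on the masked positions only.
        logits = encoder(masked)  # (batch, num_patches, vocab_size)
        return F.cross_entropy(logits[mask], targets[mask])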

Coming Soon

  • More checkpoints: We also pre-trained on 70k hours of speech and audio, achieving the best score with self-supervised modeling in the ICME 2025 Audio Encoder Challenge (see report, ICME checkpoints).
  • Integration with VERSA (see VERSA PR)
  • Detailed usage instructions. In the meantime, you can use the pre-trained checkpoints by loading them into the ESPnet BEATs encoder at espnet2/asr/encoder/beats_encoder.py (see ESPnet PR), as sketched below.
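
A minimal loading sketch, assuming the BeatsEncoder class defined in that file. The constructor argument names shown here (input_size, beats_ckpt_path) and the checkpoint filename are assumptions, so check espnet2/asr/encoder/beats_encoder.py in your ESPnet version for the exact signature.

    import torch
    from espnet2.asr.encoder.beats_encoder import BeatsEncoder

    encoder = BeatsEncoder(
        input_size=1,                         # raw-waveform input (assumed argument)
        beats_ckpt_path="openbeats_base.pt",  # hypothetical path to a released checkpoint
    )
    encoder.eval()

    # One second of 16 kHz mono audio: (batch, num_samples) plus per-utterance lengths.
    speech = torch.randn(1, 16000)
    lengths = torch.tensor([16000])

    # ESPnet encoders follow a (features, feature_lengths, states) return convention.
    with torch.no_grad():
        feats, feats_lens, _ = encoder(speech, lengths)
    print(feats.shape)  # (batch, frames, hidden_dim)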