Efficient Sparse MLPs Through Motif-Level Optimization Under Resource Constraints

Open Access
Authors
Publication date 10-2025
Journal AI
Article number 266
Volume 6 | Issue number 10
Number of pages 24
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
We study motif-based optimization for sparse multilayer perceptrons (MLPs), where weights are shared and updated at the level of small neuron groups (‘motifs’) rather than individual connections. Building on Sparse Evolutionary Training (SET), our approach reduces the number of unique parameters and redundant multiply–accumulate operations by exploiting block-structured sparsity. Across Fashion-MNIST and a lung X-ray dataset, our Motif-SET improves training/inference efficiency with modest accuracy trade-offs, and we provide a principled recipe to choose motif size based on accuracy and efficiency budgets. We further compare against representative modern sparse training and compression methods, analyze failure modes such as overly large motifs, and outline real-world constraints on mobile/embedded targets. Our results and ablations indicate that motif size 𝑚=2 often offers a strong balance between compute and accuracy under resource constraints.
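The abstract describes sharing one trainable value per small neuron group ("motif") instead of per connection, yielding block-structured sparsity. A minimal NumPy sketch of that idea is below; the construction (one shared value per active m x m block, expanded with a Kronecker product) is an illustrative assumption, not the paper's exact implementation.

```python
import numpy as np

def motif_weight_matrix(n_in, n_out, m, density, rng):
    """Illustrative sketch (not the authors' code): build a sparse weight
    matrix in which every m x m block ('motif') shares a single value,
    so unique parameters scale with the number of active blocks."""
    assert n_in % m == 0 and n_out % m == 0
    bi, bo = n_in // m, n_out // m
    # choose which motifs are active (block-structured sparsity mask)
    mask = rng.random((bi, bo)) < density
    # one shared trainable value per active motif
    values = rng.standard_normal((bi, bo)) * mask
    # tile each shared value into an m x m block of the full matrix
    W = np.kron(values, np.ones((m, m)))
    return W, mask

rng = np.random.default_rng(0)
W, mask = motif_weight_matrix(8, 8, 2, 0.5, rng)
print(W.shape)      # prints (8, 8)
print(int(mask.sum()))  # unique parameters actually stored (active motifs)
```

With m = 2, an 8x8 dense layer (64 weights) stores at most 16 unique values, which is the compute/accuracy trade-off the abstract attributes to motif size.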
Document type Article
Language English
Published at https://doi.org/10.3390/ai6100266