🧲 MAGNeT: Multimodal Adaptive Gaussian Networks for Intent Inference in Moving Target Selection across Complex Scenarios

Shandong University (SDU), Institute of Software Chinese Academy of Sciences (ISCAS), Hong Kong University of Science and Technology (HKUST), AiLF Instruments, Shandong Key Laboratory of Intelligent Electronic Packaging Testing and Application
ACM Multimedia 2025 (Oral)
[Header image]

The motivation of our work is to adapt single-factor uncertainty models to complex scenarios.

Abstract

Moving target selection in multimedia interactive systems faces unprecedented challenges as users increasingly interact across diverse, dynamic contexts—from live streaming in moving vehicles to VR gaming in varying environments. Existing approaches rely on probabilistic models that relate endpoint distributions to target properties (e.g., size and speed). However, these methods require substantial training data for each new context and lack transferability across scenarios, limiting their practical deployment in diverse multimedia environments where rich multimodal contextual information is readily available. This paper introduces MAGNeT (Multimodal Adaptive Gaussian Networks), which addresses these problems by combining classical statistical modeling with a context-aware multimodal approach. MAGNeT dynamically fuses pre-fitted Ternary-Gaussian models from various scenarios based on real-time contextual cues, enabling effective adaptation with minimal training data while preserving model interpretability. We conduct experiments on self-constructed 2D and 3D moving-target-selection datasets under in-vehicle vibration conditions. Extensive experiments demonstrate that MAGNeT achieves lower error rates with few-shot samples by applying context-aware fusion of Gaussian experts fitted under multi-factor conditions.
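To make the fusion idea concrete, below is a minimal Python sketch of context-aware fusion of pre-fitted Gaussian endpoint experts: each expert is a Gaussian fitted in one scenario, a softmax gate over a context feature vector produces fusion weights, and the fused predictive distribution is obtained by moment matching. This is an illustrative assumption of how such a system could look, not the paper's implementation; all names (GaussianExpert, fuse_experts, gate_weights) and the example context features are hypothetical.

# Illustrative sketch (NOT the official MAGNeT code): softmax-gated
# fusion of pre-fitted Gaussian endpoint experts with moment matching.
import numpy as np

def softmax(z):
    z = z - z.max()            # numerical stability
    e = np.exp(z)
    return e / e.sum()

class GaussianExpert:
    """A pre-fitted endpoint distribution for one scenario."""
    def __init__(self, mean, cov):
        self.mean = np.asarray(mean, dtype=float)  # 2D endpoint mean
        self.cov = np.asarray(cov, dtype=float)    # 2x2 covariance

def fuse_experts(experts, context, gate_weights):
    """Fuse expert Gaussians into one predictive Gaussian using
    softmax gating over a context feature vector."""
    logits = gate_weights @ context                # one logit per expert
    w = softmax(logits)
    mean = sum(wi * e.mean for wi, e in zip(w, experts))
    # Moment-matched covariance of the resulting mixture:
    cov = sum(
        wi * (e.cov + np.outer(e.mean - mean, e.mean - mean))
        for wi, e in zip(w, experts)
    )
    return mean, cov

# Example: two scenario experts (e.g., static desk vs. in-vehicle vibration)
experts = [
    GaussianExpert([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]),
    GaussianExpert([0.5, -0.2], [[4.0, 0.5], [0.5, 3.0]]),
]
context = np.array([0.8, 0.1, 0.3])   # hypothetical cues: vibration, speed, size
gate_weights = np.random.randn(len(experts), context.size) * 0.1
mu, sigma = fuse_experts(experts, context, gate_weights)
print("fused mean:", mu, "\nfused cov:\n", sigma)

In this sketch the gate is a simple linear map; the actual model learns context-dependent weights from multimodal inputs, but the fusion structure (weighted combination of pre-fitted Gaussians) is the same idea described in the abstract.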

Paper

BibTeX

@article{li2025magnet,
  title={MAGNeT: Multimodal Adaptive Gaussian Networks for Intent Inference in Moving Target Selection across Complex Scenarios},
  author={Li, Xiangxian and Zheng, Yawen and Zhang, Baiqiao and Ma, Yijia and Cao, Xianhui and Liu, Juan and Bian, Yulong and Huang, Jin and Yang, Chenglei},
  journal={arXiv preprint arXiv:2508.12992},
  year={2025}
}