Speaker: Can M. Le

Institution: UC Davis

Time: Monday, November 17, 2025 - 2:00pm to 3:00pm

Location: 340P Rowland Hall

Variational inference has been widely used in the machine learning literature to fit various Bayesian models. In network analysis, this method has been successfully applied to solve community detection problems. Although these results are promising, their theoretical support is limited to relatively dense networks, an assumption that may not hold for real networks. In addition, recent studies have shown that the variational loss surface has many saddle points, which can significantly degrade performance, especially on sparse networks. This paper proposes a simple method to improve the variational inference approach by hard thresholding the posterior of the community assignment after each iteration. We show that the proposed method can accurately recover the true community labels, even when the average node degree of the network is bounded and the initialization is arbitrarily close to random guessing. An extensive numerical study further confirms the advantage of the proposed method over classical variational inference and other algorithms.
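The hard-thresholding step mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the variational posterior is stored as an n x K matrix of community-membership probabilities (one row per node), and `hard_threshold` is a hypothetical helper name. After each variational update, every row is snapped to a one-hot vector at its largest entry.

```python
import numpy as np

def hard_threshold(tau):
    """Project each row of an n x K variational posterior matrix onto
    a one-hot vector at its argmax (the most likely community)."""
    n, K = tau.shape
    out = np.zeros_like(tau)
    out[np.arange(n), tau.argmax(axis=1)] = 1.0
    return out

# Toy posterior over K = 2 communities for n = 3 nodes.
tau = np.array([[0.6, 0.4],
                [0.3, 0.7],
                [0.5, 0.5]])  # ties resolve to the first argmax
print(hard_threshold(tau))
```

In a full algorithm this projection would be applied after every iteration of the variational updates, replacing the soft posterior with its hard assignment before the next update.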