Research team unveils “sufficient training” advance in neural networks

February 13, 2026

Imagine teaching an AI to recognize the shape of a story rather than memorizing every line. That’s the counterintuitive insight from a Queen’s University team whose paper in Nature Communications introduces “sufficient training” – a method that intentionally steers neural networks away from perfect optimization so they learn the underlying signal, not the noise.

The result may feel paradoxical: models trained “well enough” and pooled into a diverse ensemble often outperform their supposedly optimal counterparts. The researchers – co-lead authors master’s student Irina Babayan and PhD graduate Hazhir Aliahmadi, with Professor Greg van Anders (Department of Physics, Engineering Physics, and Astronomy) – frame it as a shift from memorization to genuine learning: an emergence-driven effect in which the collective is smarter than any single model.
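
The paper’s exact procedure isn’t spelled out in this release, but the core idea can be illustrated with a minimal sketch: cap each network’s optimization well short of convergence (here, a fixed iteration budget in scikit-learn’s MLPClassifier stands in for the authors’ sufficient-training criterion), create diversity with different random seeds, and pool the members by averaging their predicted probabilities. The dataset, iteration budgets, and ensemble size below are all illustrative assumptions, not the paper’s settings.

```python
# Minimal sketch of the idea as described in this release, not the
# authors' actual method: under-train several diverse networks, then
# pool them, and compare against one fully optimized network.
import warnings

import numpy as np
from sklearn.datasets import make_classification
from sklearn.exceptions import ConvergenceWarning
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in dataset (illustrative only).
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: a single network optimized to (near) convergence.
full = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
full.fit(X_train, y_train)

# "Sufficiently trained" ensemble: each member stops well short of
# convergence (max_iter=30 as a crude proxy), with diversity coming
# from different random initializations.
with warnings.catch_warnings():
    warnings.simplefilter("ignore", ConvergenceWarning)  # early stop is intended
    members = [
        MLPClassifier(hidden_layer_sizes=(64,), max_iter=30, random_state=seed)
        .fit(X_train, y_train)
        for seed in range(10)
    ]

# Pool the ensemble by averaging predicted class probabilities.
avg_proba = np.mean([m.predict_proba(X_test) for m in members], axis=0)
ensemble_acc = (avg_proba.argmax(axis=1) == y_test).mean()

print(f"fully trained single model: {full.score(X_test, y_test):.3f}")
print(f"under-trained ensemble:     {ensemble_acc:.3f}")
```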

For the tech sector, the implications are immediate: more robust AI with far less data and compute, suited both to big-data challenges tackled with transformers and to privacy-sensitive, low-data domains such as rare-disease diagnosis, fraud detection, and certain finance tasks. It also reframes conversations about model governance, efficiency, and deployment strategy.

“There’s a lot of work in the social sciences showing that diverse teams reach better outcomes,” explains Dr. van Anders. “This work shows that this intuition about people also holds for neural networks. We find that a diverse collection of neural networks substantially outperforms an individual network, or a non-diverse collection of networks.” 

To arrange an interview, contact:

Andrew Carroll | Media Relations Officer | Queen’s University