Researchers at the University of Surrey say they’ve improved AI performance and energy efficiency by redesigning neural networks to mirror how the brain connects nearby neurons. The approach—Topographical Sparse Mapping, with an enhanced, biologically inspired pruning step—reduces unnecessary connections while maintaining accuracy, according to a study in Neurocomputing. The team argues that the method could curb the hefty power demands of training large models used in generative AI, potentially trimming costs as the technology scales. They’re also exploring applications in neuromorphic hardware that could further shrink AI’s power footprint.
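The study itself isn't excerpted here, but the core idea, constraining each unit to connect only to spatially nearby units in the previous layer and then pruning the weakest of those local connections, can be sketched. The snippet below is a minimal illustration under assumed details, not the published Topographical Sparse Mapping implementation: the class name TopographicSparseLinear, the 1-D layout of units, the locality radius, and the use of plain magnitude pruning as a stand-in for the paper's biologically inspired pruning step are all assumptions.

```python
import torch
import torch.nn as nn

class TopographicSparseLinear(nn.Module):
    """Linear layer with connectivity restricted to a local neighbourhood.

    Illustrative sketch only: units in both layers are laid out on a 1-D
    line, and each output unit connects only to input units within
    `radius` of its rescaled position. Not the authors' implementation.
    """

    def __init__(self, in_features, out_features, radius=8):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        nn.init.kaiming_uniform_(self.weight, a=5 ** 0.5)

        # Fixed binary mask: output unit j sees only input units whose
        # 1-D position falls within `radius` of j's mapped position.
        out_pos = torch.linspace(0, in_features - 1, out_features)
        in_pos = torch.arange(in_features, dtype=torch.float32)
        mask = (out_pos[:, None] - in_pos[None, :]).abs() <= radius
        self.register_buffer("mask", mask.float())

    def forward(self, x):
        # Connections outside the neighbourhood are always zero.
        return x @ (self.weight * self.mask).t() + self.bias

    @torch.no_grad()
    def prune_smallest(self, fraction=0.2):
        """Magnitude-prune the weakest surviving local connections
        (a hypothetical stand-in for the enhanced pruning step)."""
        alive = self.mask.bool()
        magnitudes = self.weight[alive].abs()
        k = int(fraction * magnitudes.numel())
        if k == 0:
            return
        threshold = magnitudes.kthvalue(k).values
        self.mask *= (self.weight.abs() > threshold).float()


layer = TopographicSparseLinear(784, 256, radius=16)
x = torch.randn(32, 784)
y = layer(x)                        # dense math, sparse connectivity
layer.prune_smallest(fraction=0.2)  # drop the weakest ~20% of live links
```

One caveat on the design: masking a dense weight matrix demonstrates the connectivity pattern but does not by itself cut FLOPs or energy use; realizing the savings the article describes would require sparse kernels or, as the team suggests, neuromorphic hardware.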
Related articles:
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity
Towards spike-based machine intelligence with neuromorphic computing
Generating Long Sequences with Sparse Transformers
Pruning Convolutional Neural Networks for Resource Efficient Inference