Neuromorphic Computing · Spiking Neural Systems · Local Learning

Communication-Efficient Learning Without Backpropagation

A structured recurrent spiking framework for low-bandwidth, energy-aware intelligent systems.

Bo Tang, Weiwei Xie

Department of Electrical and Computer Engineering, Department of Computer Science, Worcester Polytechnic Institute

Paper · Code

1. Sensors: event cameras, streaming biosignals, robotic sensors, or low-power edge inputs.
2. Structured recurrent SNN: local recurrent microcircuits (layer 1 through layer K) connected by sparse small-world long-range links.
3. Modulatory feedback: random feedback projections compress task error into low-dimensional learning signals.
4. Decision: a readout population produces classification or control decisions in real time.

Figure 1. Proposed architecture: sensors produce sparse events; structured recurrent spiking layers process temporal signals; random feedback projections drive modulatory learning; the readout supports final decision making.

Abstract

Modern AI systems rely on backpropagation, which requires dense communication, large memory movement, and global synchronization. This project explores an alternative: learning through local synaptic updates guided by low-bandwidth global signals.

We introduce a structured recurrent spiking neural network that combines locally recurrent microcircuits, sparse small-world long-range connectivity, and neuromodulatory feedback. The goal is to test whether useful credit assignment can emerge without precise gradient transport, enabling learning systems whose cost scales with activity rather than network size. This framework is designed for temporal sensing tasks such as event-based vision, streaming signal classification, and low-power edge intelligence.

Core Idea

Local recurrent computation

Each layer forms a structured microcircuit that supports temporal memory and rich dynamics through local recurrence.

Sparse small-world routing

Long-range projections create efficient information flow without dense communication or all-to-all connectivity.

Low-bandwidth learning

Random feedback pathways and modulatory populations replace full gradient propagation with compressed learning signals.

Figure 2. Method overview: structured recurrent spiking layers, small-world long-range projections, WTA teaching signals, broadcast alignment, modulatory feedback, eligibility traces, and local synaptic plasticity.

Method

The proposed model combines structured recurrent spiking dynamics, sparse small-world routing, and low-bandwidth modulatory feedback. The key mechanisms are summarized below using the notation from the paper.

Network Architecture

The network contains $K$ stacked recurrent layers. Each layer $L_k$ forms a locally dense recurrent microcircuit, receives feedforward input from $L_{k-1}$, and sends sparse long-range projections to the readout population.

$$ I_i^{(k)}(t) = \sum_{j\in L_k} w_{ij}^{\mathrm{rec}} S_j^{(k)}(t) + \sum_{j\in L_{k-1}} w_{ij}^{\mathrm{ff}} S_j^{(k-1)}(t) + \sum_j w_{ij}^{\mathrm{lr}} S_j(t) $$
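
As a concrete reading of this equation, here is a minimal NumPy sketch that assembles the recurrent, feedforward, and long-range terms for one layer at one time step. The function name, argument names, and dense shapes are illustrative assumptions (in practice $w^{\mathrm{lr}}$ would be stored sparsely); this is not the paper's released code.

```python
import numpy as np

def input_current(w_rec, w_ff, w_lr, s_k, s_prev, s_lr):
    """Total drive I^{(k)}(t) for layer k (dense shapes for clarity).

    w_rec: (N_k, N_k)    local recurrent weights within layer k
    w_ff:  (N_k, N_km1)  feedforward weights from layer k-1
    w_lr:  (N_k, N_src)  sparse long-range weights (mostly zeros)
    s_k, s_prev, s_lr:   0/1 spike vectors of the matching source populations
    """
    return w_rec @ s_k + w_ff @ s_prev + w_lr @ s_lr
```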

Neuron Dynamics

Neurons follow discrete-time leaky integrate-and-fire dynamics, with a slow homeostatic threshold update to stabilize firing activity in recurrent layers.

$$ V_i(t+1) = \alpha V_i(t) + I_i(t) - V_{\mathrm{th},i}(t)\,S_i(t) $$

$$ S_i(t) = H\!\left(V_i(t)-V_{\mathrm{th},i}(t)\right) $$

$$ V_{\mathrm{th},i}(t+1) = V_{\mathrm{th},i}(t) + \eta_{\theta}\left(\bar S_i(t)-S_{\mathrm{target}}\right) $$
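
A minimal sketch of one LIF step under these updates, assuming $\bar S_i$ is an exponential moving average of recent spikes (the averaging constant `ema` is an assumption) and treating `alpha`, `eta_theta`, and `s_target` as hyperparameters:

```python
import numpy as np

def lif_step(v, v_th, s_bar, i_t, alpha=0.9, eta_theta=1e-4,
             s_target=0.02, ema=0.99):
    s = (v >= v_th).astype(float)          # S_i(t) = H(V_i(t) - V_th,i(t))
    v_next = alpha * v + i_t - v_th * s    # leak, integrate, soft reset
    s_bar = ema * s_bar + (1.0 - ema) * s  # running rate estimate (assumed EMA)
    v_th_next = v_th + eta_theta * (s_bar - s_target)  # homeostatic threshold
    return v_next, v_th_next, s, s_bar
```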

Winner-Take-All Output Teaching Signal

The readout population accumulates spike counts $z\in\mathbb{R}^C$. A sparse margin-based WTA signal provides class-level supervision without softmax normalization or gradient computation.

$$ c^* = \arg\max_c z_c, \qquad m = \max\left(0,\; \gamma + z_{c^*} - z_y\right) $$

$$ \delta_c = \begin{cases} +m, & c=c^* \neq y, \\ -m, & c=y \neq c^*, \\ 0, & \text{otherwise.} \end{cases} $$
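
A direct transcription of the teaching signal as a sketch, with `gamma` as the margin hyperparameter; `z` holds the accumulated readout spike counts and `y` is the true class index:

```python
import numpy as np

def wta_teaching_signal(z, y, gamma=1.0):
    c_star = int(np.argmax(z))              # winner c* = argmax_c z_c
    m = max(0.0, gamma + z[c_star] - z[y])  # margin violation
    delta = np.zeros(len(z))
    if c_star != y:                         # no error signal when correct
        delta[c_star] = +m                  # suppress the wrong winner
        delta[y] = -m                       # promote the true class
    return delta
```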

Broadcast Alignment and Modulatory Populations

Output errors are broadcast through fixed random matrices and compressed by low-dimensional modulatory populations. These signals gate plasticity but do not inject feedback currents into neuronal dynamics.

$$ \tilde{\delta}_i(t) = \sum_{c=1}^{C} B_{ci}\,\delta_c(t) $$

$$ u_k(t)=\frac{1}{N_k}\sum_{i\in L_k}\tilde{\delta}_i(t), \qquad m_k(t+1)=\lambda_m m_k(t)+W_{e\to m}\,u_k(t) $$

$$ L_i(t)=\tanh\!\left(\left(W_{m\to h}\,m_k(t)\right)_i\right) $$
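
A hedged sketch of one broadcast-and-compress step for a single layer. The shapes are assumptions: `B` is a fixed random `(C, N_k)` matrix, `W_em` an `(M,)` vector (since `u_k` is scalar), and `W_mh` an `(N_k, M)` matrix; none of them are learned.

```python
import numpy as np

def modulatory_step(delta, B, m_k, W_em, W_mh, lam_m=0.9):
    delta_tilde = B.T @ delta        # broadcast output error into layer k
    u_k = delta_tilde.mean()         # compress to a scalar layer signal u_k(t)
    m_k = lam_m * m_k + W_em * u_k   # leaky M-dimensional modulatory state
    gate = np.tanh(W_mh @ m_k)       # per-neuron plasticity gate L_i(t)
    return gate, m_k
```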

Eligibility Traces

Each plastic synapse stores a local eligibility trace that accumulates recent pre- and post-synaptic spike correlations and decays over time.

$$ e_{ij}(t+1)=\lambda_e e_{ij}(t)+S_j(t)S_i(t) $$
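
In vectorized form the trace update is a decayed outer product of the post- and pre-synaptic spike vectors (indexing `e[i, j]` with `i` post-synaptic, matching the $w_{ij}$ convention above); a one-line sketch:

```python
import numpy as np

def eligibility_step(e, s_post, s_pre, lam_e=0.95):
    # e_ij(t+1) = lam_e * e_ij(t) + S_i(t) * S_j(t)
    return lam_e * e + np.outer(s_post, s_pre)
```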

Synaptic Plasticity

Hidden-layer synapses are updated using a three-factor rule combining eligibility, modulatory feedback, and a local learning rate. Readout synapses use a local delta rule.

$$ \Delta w_{ij}(t) = -\eta \, e_{ij}(t)\,L_i(t) $$

$$ \Delta W_{\mathrm{out}} = -\eta_{\mathrm{out}}\, S^{\top}\delta $$
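
A sketch of both rules. For hidden layers the gate $L_i(t)$ multiplies every synapse onto neuron $i$; for the readout we assume the shape convention $W_{\mathrm{out}}\in\mathbb{R}^{C\times N}$, under which the delta rule is an outer product (the transpose of the $S^{\top}\delta$ notation above).

```python
import numpy as np

def three_factor_update(w, e, gate, eta=1e-3):
    # Delta w_ij = -eta * e_ij * L_i: modulatory gate broadcast over rows
    return w - eta * gate[:, None] * e

def readout_delta_update(W_out, s, delta, eta_out=1e-3):
    # Delta W_out = -eta_out * outer(delta, s), assuming W_out is (C, N)
    return W_out - eta_out * np.outer(delta, s)
```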

Preliminary Momentum

Initial experiments demonstrate stable local learning in structured recurrent spiking networks, supporting the feasibility of a larger proof-of-concept study.

>95% · benchmark accuracy in preliminary tests
0 · backpropagation or surrogate gradients
K · stacked recurrent layers in the SNN architecture
M ≪ N · low-dimensional feedback signal

Why It Matters

Backpropagation has powered modern AI, but its dependence on dense communication makes it poorly matched to low-power, distributed, and event-driven environments. This project targets that bottleneck directly. If successful, it could support a new class of intelligent systems that learn continuously at the edge, adapt in real time, and map naturally onto neuromorphic hardware such as Loihi-class chips.