## The Problem with Coordinates
The associative memory framework established earlier shows that almost every component of a modern deep learning system performs the same fundamental operation: it looks up values by comparing an input against stored keys. What that framework does not fully address is a subtler design constraint that becomes critical once the memory is recurrent — once it accumulates and forgets associations over time.
The constraint is this. A memory state decays. In the simplest designs, it decays channel-wise: each coordinate of the state is multiplied by its own forgetting rate $\alpha_i$. This is a coordinate operation — it acts on each basis direction of the state space independently, treating the standard basis vectors as the natural "features" to forget.
But there is no reason the standard basis should be the natural feature space. The model's learned representations are linear combinations of many coordinates at once. A single "entity identity" feature might be spread across dozens of dimensions; a single "syntactic role" feature might point diagonally through the embedding space. When the memory decays channel-wise, it does not forget entity identity a little — it shreds the feature into pieces and forgets each piece at an independent rate, leaving a garbled remainder that the model must then learn to reconstruct before it can do anything useful.
The right principle is: a model should be allowed to operate in directions, not coordinates. For any coordinate-sensitive operation — a channel-wise decay, an elementwise activation, a per-dimension gate — there must be enough linear mixing before and after that operation to ensure it acts on the model's actual learned features, not on whatever the coordinate axes happen to be. In a standard MLP, the first-layer weight matrix provides this mixing before the ReLU and the second-layer matrix provides it after. For a recurrent memory, the same logic demands that the decay act on learned directions in the state space, not on raw coordinates.
GammaNet is what emerges when you apply this demand precisely and ask what structures are left standing.
## The Gated Recurrence
Recall from the associative memory framework that linear attention collapses its key-value associations into a running matrix state $S_t \in \mathbb{R}^{d_v \times d_k}$ with updates $S_t = S_{t-1} + v_t k_t^\top$. Without any forgetting mechanism, this state accumulates all past associations with equal weight, regardless of how long ago they were written. [1, 2] The obvious remedy is a per-channel forgetting rate, proposed in Gated Linear Attention: [5, 6, 7]

$$
S_t = S_{t-1}\,\mathrm{Diag}(\alpha_t) + v_t k_t^\top \tag{GLA}
$$

where $\alpha_t \in [0,1]^{d_k}$ assigns a separate decay rate to each channel. Different channels can forget at different speeds — useful if, say, short-range syntactic features should be forgotten faster than long-range entity associations. The readout $o_t = S_t q_t$ retrieves whatever the current state has stored in the direction of $q_t$.
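For concreteness, here is a minimal NumPy sketch of one (GLA) step and its readout as written above — the shapes, seed, and variable names are illustrative, not taken from any particular implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d_k, d_v = 4, 3

def gla_step(S, alpha, k, v):
    # S_t = S_{t-1} Diag(alpha_t) + v_t k_t^T; the state S has shape (d_v, d_k).
    return S * alpha[None, :] + np.outer(v, k)

def readout(S, q):
    # o_t = S_t q_t: retrieve what the state stores in the direction of q_t.
    return S @ q

S = np.zeros((d_v, d_k))
for _ in range(6):
    alpha = rng.uniform(size=d_k)             # per-channel decay rates in [0, 1]
    k = rng.standard_normal(d_k)
    v = rng.standard_normal(d_v)
    S = gla_step(S, alpha, k, v)

print(readout(S, rng.standard_normal(d_k)))
```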
This is a reasonable first design. But the channel-wise decay has exactly the problem described above: it operates in coordinates, not directions.
## Decaying in Directions: The Feature Map
The fix is to replace the coordinate-wise decay with a decay that acts along learned directions. Introduce a fixed invertible matrix $F \in \mathbb{R}^{d_k \times d_k}$ whose columns define the preferred decay directions:

$$
S_t = S_{t-1}\,F\,\mathrm{Diag}(\alpha_t)\,F^{-1} + v_t k_t^\top \tag{GLA-F}
$$

The operator $D_F(\alpha_t) = F\,\mathrm{Diag}(\alpha_t)\,F^{-1}$ applies a change of basis into the $F$-feature space, performs the coordinate-wise decay there, and then changes back. Its effect is to decay the state along the columns of $F$ at rates $\alpha_{t,i}$, rather than along the standard basis vectors. This is precisely the "linear mixing around the coordinate operation" that the introduction called for: $F^{-1}$ mixes into the feature coordinates before the decay, $\mathrm{Diag}(\alpha_t)$ acts coordinate-wise in feature space, and $F$ mixes back.
The model can now learn which directions in the state space correspond to features that should be forgotten quickly and which should persist — rather than being forced to align its internal representations with the standard basis or waste capacity on the coordinate transformation.
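To see the operator in isolation, the following sketch (again NumPy, with an arbitrary random $F$ standing in for a learned feature basis) materializes $D_F(\alpha_t)$ and applies it on the key side of the state:

```python
import numpy as np

rng = np.random.default_rng(1)
d_k, d_v = 4, 3

def feature_decay(S, alpha, F):
    # D_F(alpha) = F Diag(alpha) F^{-1}: change basis into the F-feature space,
    # decay coordinate-wise there, and change back. Applied on the key side of S.
    D = F @ np.diag(alpha) @ np.linalg.inv(F)
    return S @ D

S = rng.standard_normal((d_v, d_k))           # some existing memory state
alpha = rng.uniform(size=d_k)                 # per-feature decay rates
F = rng.standard_normal((d_k, d_k))           # a fixed invertible feature basis

S_decayed = feature_decay(S, alpha, F)

# With F = I this reduces to the plain channel-wise decay of (GLA).
assert np.allclose(feature_decay(S, alpha, np.eye(d_k)), S * alpha[None, :])
```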
## Folding F into the Weights
Having motivated the feature map, a natural question is whether adding $F$ to (GLA) actually gives the model new expressive power. The answer, for this basic recurrence, is no.
Folding refers to the observation that a fixed linear map sandwiched between two learnable matrices can always be absorbed into those matrices without changing the model's function class. Adding a fixed rotation before the first layer of an MLP, for example, is equivalent to simply learning a rotated first-layer weight matrix — the function class is identical.
The same applies here. Define the change of representation $\tilde S_t = S_t F$ and the modified projections $\tilde k_t = F^\top k_t$, $\tilde q_t = F^{-1} q_t$. Then:

$$
\tilde S_t = S_{t-1}\,F\,\mathrm{Diag}(\alpha_t)\,F^{-1}F + v_t k_t^\top F = \tilde S_{t-1}\,\mathrm{Diag}(\alpha_t) + v_t \tilde k_t^\top
$$

and $o_t = S_t q_t = \tilde S_t F^{-1} q_t = \tilde S_t \tilde q_t$. Model (GLA-F) is exactly equivalent to the standard (GLA), with modified projection matrices $\widetilde W_K = F^\top W_K$ and $\widetilde W_Q = F^{-1} W_Q$. Since these are still arbitrary learnable matrices, $F$ vanishes into the weights and adds nothing.
This means that for the gated recurrence alone, the feature-decay motivation — while conceptually correct — is already satisfied for free. Any feature basis the model wants to operate in can be implicitly learned through the key and query projections, without ever appearing explicitly in the architecture.
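The folding identity is easy to confirm numerically. The sketch below runs (GLA-F) directly and runs plain (GLA) with the transformed keys $\tilde k_t = F^\top k_t$ and queries $\tilde q_t = F^{-1} q_t$, then checks that the two readout sequences coincide (all inputs are random and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
d_k, d_v, T = 4, 3, 8

F = rng.standard_normal((d_k, d_k))           # arbitrary invertible feature basis
F_inv = np.linalg.inv(F)

alphas = rng.uniform(size=(T, d_k))
ks = rng.standard_normal((T, d_k))
vs = rng.standard_normal((T, d_v))
qs = rng.standard_normal((T, d_k))

# (GLA-F): decay in the F-feature basis, original keys and queries.
S = np.zeros((d_v, d_k))
out_f = []
for a, k, v, q in zip(alphas, ks, vs, qs):
    S = S @ (F @ np.diag(a) @ F_inv) + np.outer(v, k)
    out_f.append(S @ q)

# (GLA) with folded projections: k~ = F^T k, q~ = F^{-1} q, plain channel-wise decay.
S_tilde = np.zeros((d_v, d_k))
out_plain = []
for a, k, v, q in zip(alphas, ks, vs, qs):
    S_tilde = S_tilde * a[None, :] + np.outer(v, F.T @ k)
    out_plain.append(S_tilde @ (F_inv @ q))

assert np.allclose(out_f, out_plain)          # identical readouts: F has folded into the weights
```

The same check, run against the delta-rule recurrence of the next sections, is exactly what fails.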
## The Missing Piece: Surgical Replacement
The gated recurrence has a structural limitation beyond forgetting: it can only add new key-value associations on top of existing ones. When the model needs to update the value stored at a key direction it already knows about — to revise a belief, correct an entity attribute, or track a changing state — it cannot do so cleanly. The old association persists, corrupted by the new write.
The right operation is to first erase the old value before writing the new one. Suppose the old state has an association stored in some unit-norm direction $k$: reading it out gives $S_{t-1} k$. To remove exactly this association while preserving everything orthogonal to $k$, subtract the rank-one outer product $(S_{t-1} k)\,k^\top$:

$$
S_{t-1} - (S_{t-1} k)\,k^\top = S_{t-1}\,(I - k k^\top)
$$

This annihilates the $k$-component of the state and leaves all orthogonal content intact. Allowing a partial erase controlled by $\beta_t \in [0,1]$ and then writing the new value gives the delta rule: [2, 3, 4]

$$
S_t = S_{t-1}\,(I - \beta_t k_t k_t^\top) + \beta_t v_t k_t^\top
$$

where $k_t$ (taken to be unit-norm) is the normalized key used as the erase direction.
Combining temporal forgetting with surgical replacement gives the full recurrence known as Kimi Delta Attention that will be our starting point for GammaNet: [6, 7]

$$
S_t = S_{t-1}\,\mathrm{Diag}(\alpha_t)\,(I - \beta_t k_t k_t^\top) + \beta_t v_t k_t^\top \tag{KDA}
$$
The first term decays old associations and erases the specific one about to be overwritten; the second term writes the new one.
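A minimal sketch of one (KDA) step as written above, with the erase key explicitly normalized (shapes and gate ranges are illustrative):

```python
import numpy as np

def kda_step(S, alpha, beta, k, v):
    # S_t = S_{t-1} Diag(alpha_t) (I - beta_t k_t k_t^T) + beta_t v_t k_t^T:
    # decay old associations, partially erase along the (unit-norm) key, then write.
    k = k / np.linalg.norm(k)                         # erase direction must be unit norm
    erase = np.eye(k.shape[0]) - beta * np.outer(k, k)
    return (S * alpha[None, :]) @ erase + beta * np.outer(v, k)

rng = np.random.default_rng(3)
d_k, d_v = 4, 3
S = np.zeros((d_v, d_k))
for _ in range(6):
    S = kda_step(S,
                 alpha=rng.uniform(size=d_k),         # per-channel decay rates
                 beta=rng.uniform(),                  # partial-erase strength in [0, 1]
                 k=rng.standard_normal(d_k),
                 v=rng.standard_normal(d_v))
print(S @ rng.standard_normal(d_k))                   # readout o_t = S_t q_t
```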
## Folding Fails for the Delta Rule
Now apply the same feature-map upgrade: replace the channel-wise decay $\mathrm{Diag}(\alpha_t)$ with $D_F(\alpha_t) = F\,\mathrm{Diag}(\alpha_t)\,F^{-1}$:

$$
S_t = S_{t-1}\,F\,\mathrm{Diag}(\alpha_t)\,F^{-1}\,(I - \beta_t k_t k_t^\top) + \beta_t v_t k_t^\top \tag{KDA-F}
$$

Attempt the same folding. Define $\tilde S_t = S_t F$ and $\tilde k_t = F^\top k_t$. Then:

$$
\tilde S_t = \tilde S_{t-1}\,\mathrm{Diag}(\alpha_t)\,\bigl[F^{-1}(I - \beta_t k_t k_t^\top)F\bigr] + \beta_t v_t \tilde k_t^\top
$$

Expanding the middle factor:

$$
F^{-1}(I - \beta_t k_t k_t^\top)F = I - \beta_t\,(F^{-1} k_t)(F^\top k_t)^\top
$$

This is a rank-one subtraction with different left and right factors: $F^{-1} k_t$ on the left and $F^\top k_t$ on the right. For this to be a symmetric projector — the only form the standard model can produce — we would need $F^{-1} k_t \propto F^\top k_t$ for every key, which requires $F^\top F \propto I$, i.e., $F$ is orthogonal (up to scaling). For any non-orthogonal $F$, the erase term is a biorthogonal rank-one operator that no choice of projection matrix can reproduce from a symmetric projector.
Why does folding break here when it worked before? In the gated recurrence, the key appeared only once — in the write term — so the coordinate change absorbed cleanly. In the KDA recurrence, the key plays two roles: the erase direction in $I - \beta_t k_t k_t^\top$ and the write address in $\beta_t v_t k_t^\top$. Changing coordinates transforms both simultaneously, but the erase term conjugates $F$ around the projector (left-multiplying by $F^{-1}$ and right-multiplying by $F$) while the write term absorbs $F$ only on the right — structurally different transformations that leave a residual dependence on $F$ impossible to hide in projection weights. [4, 6, 7]
For the KDA recurrence, $F$ is not redundant. It genuinely changes what the model can compute, and architectural choices about $F$ matter.
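Both claims — the middle-factor identity and the fact that the resulting erase operator is symmetric only when $F$ is orthogonal — can be checked directly; the sketch below uses a random unit key and a random $\beta$, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
d = 4
k = rng.standard_normal(d)
k /= np.linalg.norm(k)                                    # unit-norm key
beta = 0.7

def folded_erase(F):
    # Check F^{-1} (I - beta k k^T) F  ==  I - beta (F^{-1} k)(F^T k)^T, then return it.
    F_inv = np.linalg.inv(F)
    conjugated = F_inv @ (np.eye(d) - beta * np.outer(k, k)) @ F
    rank_one = np.eye(d) - beta * np.outer(F_inv @ k, F.T @ k)
    assert np.allclose(conjugated, rank_one)
    return conjugated

F_general = rng.standard_normal((d, d))                   # non-orthogonal feature basis
F_ortho, _ = np.linalg.qr(rng.standard_normal((d, d)))    # orthogonal feature basis

E_general = folded_erase(F_general)
E_ortho = folded_erase(F_ortho)

print(np.allclose(E_general, E_general.T))   # False: biorthogonal, left/right factors differ
print(np.allclose(E_ortho, E_ortho.T))       # True: symmetric projector, F folds away
```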
## An Even More Immediate Problem: Instability
Before asking what $F$ can express, there is a more immediate concern. For general non-orthogonal $F$, the (KDA-F) recurrence can be unstable: the state grows exponentially even with no writes.
Consider the homogeneous part of (KDA-F) with the write term dropped ($v_t = 0$):

$$
S_t = S_{t-1}\,F\,\mathrm{Diag}(\alpha_t)\,F^{-1}\,(I - \beta_t k_t k_t^\top)
$$

For orthogonal $F$: $\|F\,\mathrm{Diag}(\alpha_t)\,F^{-1}\|_2 = \max_i \alpha_{t,i} \le 1$ (orthogonal maps preserve singular values), and $\|I - \beta_t k_t k_t^\top\|_2 \le 1$ for $\beta_t \in [0,1]$. Every step is non-expansive. For standard (KDA) with $F = I$, this gives unconditional stability. [6, 7]
For non-orthogonal $F$, this fails. Take, for instance, the shear

$$
F = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}, \qquad \alpha_t = (1,\, 0), \qquad \beta_t = 1, \qquad k_t = \tfrac{1}{\sqrt 2}\begin{pmatrix} 1 \\ 1 \end{pmatrix},
$$

held fixed over time. Computing the transition $F\,\mathrm{Diag}(\alpha_t)\,F^{-1}\,(I - k_t k_t^\top)$ directly yields a matrix with spectral radius $3/2$. Repeated application makes the state grow exponentially.
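The example is small enough to verify in a few lines of NumPy:

```python
import numpy as np

F = np.array([[1.0, 2.0],
              [0.0, 1.0]])                    # non-orthogonal shear
alpha = np.array([1.0, 0.0])                  # per-feature decay rates, both in [0, 1]
k = np.array([1.0, 1.0]) / np.sqrt(2)         # unit-norm key, beta_t = 1

D_F = F @ np.diag(alpha) @ np.linalg.inv(F)   # oblique decay operator
M = D_F @ (np.eye(2) - np.outer(k, k))        # one homogeneous transition

print(np.abs(np.linalg.eigvals(M)).max())     # spectral radius: 1.5

S = np.array([[1.0, 0.0]])                    # a one-row state
for _ in range(20):
    S = S @ M                                 # repeated homogeneous steps
print(np.linalg.norm(S))                      # grows roughly like 1.5**t
```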
The root cause is a metric mismatch: $F\,\mathrm{Diag}(\alpha_t)\,F^{-1}$ is contractive only in the $F$-induced norm $\|x\|_F = \|F^{-1}x\|$, while the projector $I - \beta_t k_t k_t^\top$ is non-expansive in the Euclidean norm. For non-orthogonal $F$ these norms are incompatible — the projector can amplify directions that the decay was supposed to contract.
This settles the question of whether non-orthogonal $F$ is merely a reparameterization of (KDA). Standard (KDA) is unconditionally stable; (KDA-F) with non-orthogonal $F$ is not. A stable model and an unstable model cannot be reparameterizations of each other. The feature basis is a genuine architectural choice with real consequences.
## Deriving the Stable Feature Bases
We want to characterize all fixed invertible $F$ for which (KDA-F) is non-expansive for every admissible $\alpha_t \in [0,1]^{d_k}$, $\beta_t \in [0,1]$, and unit-norm $k_t$.
Since $\|I - \beta_t k_t k_t^\top\|_2 \le 1$ always, the condition reduces to:

$$
\|F\,\mathrm{Diag}(\alpha)\,F^{-1}\|_2 \le 1 \quad \text{for all } \alpha \in [0,1]^{d_k}.
$$

Write $F$ in column form with columns $f_i$ and $F^{-1}$ in row form with rows $g_i^\top$. Setting $\mathrm{Diag}(\alpha) = e_i e_i^\top$ (the $i$-th coordinate projector) gives:

$$
F\,e_i e_i^\top\,F^{-1} = f_i\,g_i^\top, \qquad \|f_i\,g_i^\top\|_2 = \|f_i\|\,\|g_i\|.
$$

Since $g_i^\top f_i = 1$, Cauchy-Schwarz forces $\|f_i\|\,\|g_i\| \ge 1$. The stability requirement demands this equals exactly 1, which by the Cauchy-Schwarz equality case requires $g_i \propto f_i$. Combined with $g_i^\top f_j = 0$ for $j \ne i$ (rows of $F^{-1}$ are dual to columns of $F$), this forces $f_i^\top f_j = 0$ for all $i \ne j$: the columns of $F$ must be mutually orthogonal. This is the condition $F^\top F = \text{diagonal}$, which characterizes $F$ exactly:

$$
F = Q\,\Gamma
$$

for orthogonal $Q$ and positive diagonal $\Gamma$. This class is both necessary and sufficient for unconditional stability.
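A quick numerical sanity check of the characterization — sampling decay vectors (including the vertices of the box, where the supremum of this convex function is attained) for a random $F = Q\Gamma$ versus a generic non-orthogonal $F$; the construction of $Q$ and $\Gamma$ here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
d = 6

def worst_decay_norm(F, trials=2000):
    # Estimate sup over alpha in [0,1]^d of || F Diag(alpha) F^{-1} ||_2 by sampling.
    # The norm is convex in alpha, so the supremum sits at a vertex of the box;
    # half the samples are therefore drawn from {0,1}^d.
    F_inv = np.linalg.inv(F)
    worst = 0.0
    for _ in range(trials):
        if rng.uniform() < 0.5:
            alpha = rng.uniform(size=d)
        else:
            alpha = rng.integers(0, 2, size=d).astype(float)
        worst = max(worst, np.linalg.norm(F @ np.diag(alpha) @ F_inv, 2))
    return worst

Q, _ = np.linalg.qr(rng.standard_normal((d, d)))      # orthogonal factor
Gamma = np.diag(rng.uniform(0.5, 2.0, size=d))        # positive diagonal factor

print(worst_decay_norm(Q @ Gamma))                    # stays at (numerically) 1: never expansive
print(worst_decay_norm(rng.standard_normal((d, d))))  # typically well above 1: unstable basis
```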
## The Γ Parameterization
With $F = Q\Gamma$, the decay operator is $F\,\mathrm{Diag}(\alpha_t)\,F^{-1} = Q\,\Gamma\,\mathrm{Diag}(\alpha_t)\,\Gamma^{-1}Q^\top = Q\,\mathrm{Diag}(\alpha_t)\,Q^\top$ (since diagonal matrices commute). The erase projector in the folded recurrence becomes:

$$
I - \beta_t\,(F^{-1}k_t)(F^\top k_t)^\top = I - \beta_t\,(\Gamma^{-1}\kappa_t)(\Gamma\,\kappa_t)^\top
$$

where $\kappa_t = Q^\top k_t$. Since $k_t = W_K\,x_t$, and $Q$ is orthogonal (so it preserves norms, keeping $\|\kappa_t\| = \|k_t\| = 1$):

$$
\kappa_t = Q^\top W_K\,x_t = \widetilde W_K\,x_t
$$

where $\widetilde W_K = Q^\top W_K$ is simply a different learned matrix. The orthogonal factor $Q$ folds into the erase projection — it is a gauge choice exactly as in the gated recurrence, and for the same reason: $Q$ acts only as a fixed linear map adjacent to a learnable weight matrix. What cannot fold is $\Gamma$, which appears asymmetrically (as $\Gamma^{-1}$ on the left factor and $\Gamma$ on the right factor of the erase projector) and therefore cannot be absorbed by a single weight matrix.
Working in the $Q$-rotated basis (with $Q$ absorbed into all projection matrices), the GammaNet recurrence is:

$$
S_t = S_{t-1}\,\mathrm{Diag}(\alpha_t)\,\bigl(I - \beta_t\,(\Gamma^{-1}k_t)(\Gamma\,k_t)^\top\bigr) + \beta_t\,v_t\,(\Gamma\,k_t)^\top \tag{GammaNet}
$$

with $\|k_t\| = 1$, $\beta_t \in [0,1]$, and $\Gamma$ a positive diagonal matrix that is learned but fixed across time steps. [6, 7]
The separately learned $\Gamma$ decouples where memory is addressed (via the write address $\Gamma k_t$) from which feature direction is erased (via the erase direction $\Gamma^{-1} k_t$). Setting $\Gamma = I$ forces the model to use the same linear map for addressing and erasing — a meaningful structural constraint. Allowing them to differ lets the model address by entity identity and erase by attribute type.
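Putting the pieces together, here is a minimal sketch of one GammaNet step in the $Q$-rotated basis; the shapes, the initialization of $\Gamma$, and the gate ranges are illustrative choices, not a reference implementation:

```python
import numpy as np

def gammanet_step(S, alpha, beta, k, v, gamma):
    # S_t = S_{t-1} Diag(alpha_t) (I - beta_t (Gamma^{-1} k_t)(Gamma k_t)^T)
    #       + beta_t v_t (Gamma k_t)^T
    # Address and write along Gamma k_t; erase along Gamma^{-1} k_t.
    k = k / np.linalg.norm(k)                         # unit-norm key
    write_dir = gamma * k                             # Gamma k_t
    erase_dir = k / gamma                             # Gamma^{-1} k_t
    erase = np.eye(k.shape[0]) - beta * np.outer(erase_dir, write_dir)
    return (S * alpha[None, :]) @ erase + beta * np.outer(v, write_dir)

rng = np.random.default_rng(6)
d_k, d_v = 4, 3
gamma = np.exp(0.1 * rng.standard_normal(d_k))        # positive diagonal of Gamma, fixed over time
S = np.zeros((d_v, d_k))
for _ in range(6):
    S = gammanet_step(S,
                      alpha=rng.uniform(size=d_k),
                      beta=rng.uniform(),
                      k=rng.standard_normal(d_k),
                      v=rng.standard_normal(d_v),
                      gamma=gamma)
print(S @ rng.standard_normal(d_k))                   # readout o_t = S_t q_t
```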
## Summary
The path from the gated recurrence to GammaNet in three steps:
- The gated recurrence decays in coordinates, not directions. Replacing channel-wise decay with a feature-basis decay is the conceptually correct fix — but for the gated recurrence alone, $F$ is redundant: it folds into the key and query projections with no change in expressive power. [1, 2, 5]
- The delta rule breaks folding. Adding surgical replacement makes the key play two roles simultaneously. The coordinate change that absorbed $F$ in the gated case now transforms the erase and write terms differently, leaving a residual dependence on $F$ that no projection matrix can reproduce. More immediately, non-orthogonal $F$ makes the recurrence unstable — ruling out any claim that it is a reparameterization of the unconditionally stable baseline. [4, 6, 7]
- Stability forces $F = Q\Gamma$, and $Q$ folds. The only feature bases guaranteeing unconditional stability are those with orthogonal columns — $F = Q\Gamma$ for orthogonal $Q$ and positive diagonal $\Gamma$. The orthogonal factor $Q$ folds into the erase projection, leaving $\Gamma$ as the irreducible fixed feature geometry. $\Gamma$ cannot fold because it appears asymmetrically in the erase projector, and it is precisely this asymmetry that lets the model erase along learned feature directions rather than raw coordinates.
## References
[1] Katharopoulos, A., Vyas, A., Pappas, N., & Fleuret, F. (2020). Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention.
[2] Schlag, I., Irie, K., & Schmidhuber, J. (2021). Linear Transformers Are Secretly Fast Weight Programmers.
[3] Widrow, B., & Hoff, M. E. (1960). Adaptive Switching Circuits.
[4] Yang, S., Wang, B., Zhang, Y., Shen, Y., & Kim, Y. (2024). Parallelizing Linear Transformers with the Delta Rule over Sequence Length.
[5] Yang, S., Wang, B., Shen, Y., Panda, R., & Kim, Y. (2024). Gated Linear Attention Transformers with Hardware-Efficient Training.
[6] Yang, S., Kautz, J., & Hatamizadeh, A. (2025). Gated Delta Networks: Improving Mamba2 with Delta Rule.
[7] Kimi Team, Zhang, Y., Lin, Z., et al. (2025). Kimi Linear: An Expressive, Efficient Attention Architecture.
Cite this post
@online{gamma-net,
author = {Lucas Sun},
title = {GammaNet - Stable Feature-Space Decay in Linear RNNs},
year = {2026},
month = {05},
day = {02},
url = {https://xtimecrystal.com/posts/260502-gamma-net/},
}