Abstract

Transformers excel at discovering patterns in sequential data, yet their fundamental limitations and learning mechanisms remain crucial topics of investigation. In this paper, we study the ability of Transformers to learn pseudo-random number sequences from linear congruential generators (LCGs), defined by the recurrence relation $x_{t+1} = a x_t + c \bmod m$. Our analysis reveals that, with sufficient architectural capacity and training data variety, Transformers can perform in-context prediction of LCG sequences with unseen moduli ($m$) and parameters ($a, c$). Through analysis of embedding layers and attention patterns, we uncover how Transformers develop algorithmic structures to learn these sequences in two scenarios of increasing complexity. First, we analyze how Transformers learn LCG sequences with unseen $(a, c)$ but fixed modulus, and we demonstrate successful learning up to $m = 2^{32}$. Here, the models learn to factorize the modulus and use digit-wise number representations to make sequential predictions. In the second, more challenging scenario, we show that Transformers can generalize to unseen moduli up to $m_{\text{test}} = 2^{16}$. In this case, the model employs a two-step strategy: it first estimates the unknown modulus from the context and then uses prime factorizations to generate predictions. For this task, we observe a sharp transition in accuracy at a critical depth of 3. We also find that the number of in-context sequence elements needed to reach high accuracy scales sublinearly with the modulus.
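As a minimal illustration of the sequences studied above, the sketch below rolls out an LCG via $x_{t+1} = a x_t + c \bmod m$ for randomly sampled parameters. The parameter ranges, sequence length, and helper name are illustrative assumptions, not the paper's exact data-generation setup.

```python
import random

def lcg_sequence(m: int, a: int, c: int, x0: int, length: int) -> list[int]:
    """Roll out a linear congruential generator: x_{t+1} = (a * x_t + c) mod m."""
    seq = [x0 % m]
    for _ in range(length - 1):
        seq.append((a * seq[-1] + c) % m)
    return seq

# Illustrative only: fixed modulus with randomly sampled (a, c, x0), loosely
# mirroring the fixed-modulus scenario; the paper's sampling scheme may differ.
m = 2048
a = random.randrange(1, m)
c = random.randrange(1, m)
x0 = random.randrange(m)
print(lcg_sequence(m, a, c, x0, length=16))
```

In the in-context setting described in the abstract, such a sequence would be given to the model as context, and the model's task is to predict the next element without being told $(m, a, c)$.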

Publication Details
Publication Type
Journal Article
Year of Publication
2025
URL
https://arxiv.org/abs/2502.10390
Date Published
02/2025