GCC AI Research

What Really Counts: Theoretical and Empirical Aspects of Counting Behaviour in Simple RNNs

MBZUAI · Notable

Summary

Nadine El Naggar from City, University of London presented research on how RNNs learn counting behavior, formalized as Dyck-1 acceptance. Empirically, RNN models struggle to learn exact counting and fail on longer sequences, even when weights are correctly initialized. Theoretically, the work proposes Counter Indicator Conditions (CICs) and proves them necessary and sufficient for exact counting in single-cell RNNs; experiments show, however, that these CICs are not found during training, or are unlearned when present at initialization. Why it matters: This work highlights the difficulty RNNs have with systematic tasks, suggesting that gradient descent-based optimization may not reach exact counting behavior under standard training setups.
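To make the task concrete, here is a minimal sketch of Dyck-1 acceptance (balanced parentheses) next to a single-cell ReLU RNN with hand-set weights. The input encoding ('(' → +1, ')' → −1) and the specific weight values are illustrative assumptions, not the paper's exact CIC construction; the point is that the cell matches the counter only on valid prefixes, which is why exact conditions on the weights matter.

```python
def dyck1_accepts(s: str) -> bool:
    """Exact Dyck-1 acceptance via an integer counter."""
    depth = 0
    for ch in s:
        depth += 1 if ch == "(" else -1
        if depth < 0:        # a ')' with no matching '(' -> reject immediately
            return False
    return depth == 0        # accept only if every '(' was closed


def relu_cell_depth(s: str) -> float:
    """Single-cell ReLU RNN h_t = max(0, u*h + w*x + b) with hand-set
    weights u=1, w=1, b=0 and inputs '(' -> +1.0, ')' -> -1.0.
    (Illustrative encoding, not the paper's construction.)"""
    h = 0.0
    for ch in s:
        x = 1.0 if ch == "(" else -1.0
        h = max(0.0, 1.0 * h + 1.0 * x + 0.0)
    return h
```

Note the failure mode: because ReLU clips at zero, the cell cannot record that a ')' went below depth 0, so a string like ")(" ends with h = 1.0 even though it is not Dyck-1. This is one reason counting behavior has to be characterized by exact conditions rather than approximate fits.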


Related

Learning Time-Series Representations by Hierarchical Uniformity-Tolerance Latent Balancing

arXiv

The paper introduces TimeHUT, a method for learning time-series representations via hierarchical uniformity-tolerance balancing of contrastive representations. TimeHUT uses a hierarchical setup to capture both instance-wise and temporal information, together with a temperature scheduler that balances uniformity and tolerance in the contrastive loss. Evaluated on the UCR, UEA, Yahoo, and KPI datasets, it shows superior performance on classification and competitive results on anomaly detection.
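A temperature scheduler of the kind described can be sketched as follows. The cosine shape, the parameter names (`tau_min`, `tau_max`), and the direction of the schedule (high temperature early for tolerance, low temperature late for uniformity) are all assumptions for illustration, not TimeHUT's published schedule.

```python
import math

def cosine_temperature(epoch: int, total_epochs: int,
                       tau_min: float = 0.07, tau_max: float = 0.7) -> float:
    """Hypothetical cosine-annealed temperature for a contrastive loss.
    High tau early keeps the loss tolerant of near-positives; low tau
    late pushes embeddings toward a more uniform distribution."""
    cos_term = 0.5 * (1.0 + math.cos(math.pi * epoch / total_epochs))
    return tau_min + (tau_max - tau_min) * cos_term
```

In use, the scheduled value would replace the fixed temperature in an NT-Xent-style contrastive loss, e.g. `loss = nt_xent(z1, z2, tau=cosine_temperature(epoch, total_epochs))` (where `nt_xent` is a hypothetical loss function).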