Journal of Beijing University of Posts and Telecommunications ›› 2024, Vol. 47 ›› Issue (3): 103-110.


A Neural Network Architecture for Learning Invariant Representations

  

Received: 2023-06-28; Revised: 2023-09-25; Online: 2024-06-30; Published: 2024-06-13

Abstract: Distortion in data, where different input feature vectors may represent the same entity, is a long-standing difficulty in machine learning. The study of this problem has spurred the development of invariant machine learning methods capable of, for example, ignoring translation, rotation, illumination, and pose changes in images. Such methods typically rely on pre-defined invariant features or invariant kernels, and they require the designer to carefully analyze the types of distortions that may exist in the data. While the possible types of distortions in image data are easy to identify, in other domains they are not. Our goal is to learn invariant representations from non-image data based only on information about whether two samples are distorted variants of the same entity, without any information about what distortions are present in the data. In theory, given a sufficiently large number of samples, standard neural network architectures should be capable of learning invariance from data. In practice, we find experimentally that standard neural networks struggle to approximate even simple types of invariant representations. We therefore propose a new extended layer with richer output representations that is better suited to learning invariance from data.
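The abstract specifies the supervision signal (whether two samples are distorted variants of the same entity) but not the architecture of the proposed extended layer. The sketch below is therefore only a minimal illustration of that pairwise setup, not the paper's method: it trains a plain PyTorch encoder with a contrastive loss so that circularly shifted copies of a random time series (translation being the distortion type named in the keywords) map to nearby representations. The names (make_pair, contrastive_loss, SERIES_LEN) and the choice of contrastive loss are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the authors' method) of learning
# translation-invariant representations from pairwise "same entity" labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

SERIES_LEN = 64   # length of each time series (illustrative choice)
EMBED_DIM = 16    # dimension of the learned representation

def make_pair(batch_size):
    """Return (x1, x2, label): label 1 if x2 is a shifted copy of x1, else 0.
    This is the only supervision the abstract assumes: no distortion type
    is revealed to the model, only same-entity/different-entity labels."""
    x1 = torch.randn(batch_size, SERIES_LEN)
    shifts = torch.randint(1, SERIES_LEN, (batch_size,))
    same = torch.rand(batch_size) < 0.5
    x2 = torch.stack([
        torch.roll(x1[i], int(shifts[i])) if same[i] else torch.randn(SERIES_LEN)
        for i in range(batch_size)
    ])
    return x1, x2, same.float()

encoder = nn.Sequential(          # a plain MLP encoder stands in here; the
    nn.Linear(SERIES_LEN, 128),   # paper's extended layer is not described
    nn.ReLU(),                    # in the abstract
    nn.Linear(128, EMBED_DIM),
)

def contrastive_loss(z1, z2, label, margin=1.0):
    """Pull together representations of the same entity, push apart others."""
    d = F.pairwise_distance(z1, z2)
    return (label * d.pow(2) + (1 - label) * F.relu(margin - d).pow(2)).mean()

opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for step in range(2000):
    x1, x2, label = make_pair(256)
    loss = contrastive_loss(encoder(x1), encoder(x2), label)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 500 == 0:
        print(f"step {step}: loss {loss.item():.4f}")
```

Per the abstract's claim, a standard encoder like this plain MLP would be expected to struggle as the variety of translations grows; that gap is what the proposed extended layer, with its richer output representations, is meant to close.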

Key words: time series, translation invariance, neural networks, auto-encoders, extended layer
