Talk by Kenny Schlegel of the Chemnitz University of Technology, Chemnitz, Germany. Given at the Redwood Center for Theoretical Neuroscience at UC Berkeley.
Vector Symbolic Architectures (VSAs) combine a high-dimensional vector space with a set of carefully designed operators in order to perform symbolic computations with large numerical vectors. Major goals are to exploit the representational power of such vectors and their ability to deal with fuzziness and ambiguity. The basis of a VSA is a set of high-dimensional vectors, which can represent entities or data as symbols. From these vectors and the operators, it is possible to create compositional structures without losing the underlying original symbols and their relations. The principles of VSAs have already been applied in several applications, mostly with the simple structure of superimposed role-filler pairs.
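As a minimal sketch of the role-filler mechanism (not code from the talk), the MAP architecture uses elementwise multiplication as binding and thresholded summation as superposition; all names below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # high dimensionality keeps random vectors quasi-orthogonal

def random_vec():
    """Random bipolar vector: a common choice of VSA symbol."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding in MAP: elementwise multiplication (self-inverse)."""
    return a * b

def bundle(*vs):
    """Superposition: elementwise sum, thresholded back toward bipolar."""
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):
    """Cosine similarity, used to query the compositional structure."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Encode a record of role-filler pairs, e.g. {color: red, shape: circle}
color, shape = random_vec(), random_vec()   # roles
red, circle = random_vec(), random_vec()    # fillers
record = bundle(bind(color, red), bind(shape, circle))

# Unbinding with a role recovers a noisy version of its filler,
# so the original symbols remain accessible inside the composition.
query = bind(record, color)
print(sim(query, red))      # high similarity
print(sim(query, circle))   # near zero
```

The key property is that `record` has the same dimensionality as every individual symbol, yet a similarity search against the known fillers still recovers what was bound to each role.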
In this talk, I will first give an overview of our VSA comparison study, in which different existing VSA implementations were compared experimentally.
Second, I will explain our experience in applying VSAs in computer vision and signal processing, specifically visual place recognition and time series classification. There, we also built upon the structure of superimposed role-filler pairs and were able to use them to improve existing algorithms. For example, in the field of visual place recognition, we can enrich the descriptor vector of an image with additional information, such as spatial semantic information, without increasing the size of the resulting vector representation. This saves computational costs and can increase performance.
In another application, we integrated the principles of a VSA into a state-of-the-art time series classification algorithm to provide explicit global time encoding. This prevents the original method from failing in special cases where global temporal context is important to distinguish signals. Moreover, this time encoding can also improve results on multiple datasets from a benchmark ensemble for time series classification.
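One common way to encode continuous time in a VSA is fractional power encoding with phasor (complex-valued) vectors: a fixed random base vector raised elementwise to the power t yields timestamp vectors whose similarity falls off smoothly with the time difference. The sketch below illustrates the general idea only, not the specific HDC-MiniROCKET implementation; all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000

# Fixed random phasor base vector; time_vec(t) = base ** t gives a
# smooth, similarity-preserving encoding of the continuous value t.
base = np.exp(1j * rng.uniform(-np.pi, np.pi, D))

def time_vec(t):
    return base ** t

def sim(a, b):
    """Similarity for phasor vectors: mean real part of a * conj(b)."""
    return (a * b.conj()).real.mean()

# Nearby timestamps are similar, distant ones nearly orthogonal:
print(sim(time_vec(0.0), time_vec(0.1)))  # close to 1
print(sim(time_vec(0.0), time_vec(5.0)))  # near 0

# Binding a feature vector to its timestamp (elementwise multiplication)
# stamps "when" onto "what", making global position explicit.
feature = np.exp(1j * rng.uniform(-np.pi, np.pi, D))
stamped = feature * time_vec(2.0)
```

Superimposing such time-stamped feature vectors over a whole signal yields a single fixed-size representation in which global temporal structure is preserved, which is exactly what a purely convolutional feature extractor can lose.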
Neubert, P., Schubert, S., Schlegel, K. & Protzel, P. (2021) Vector Semantic Representations as Descriptors for Visual Place Recognition. In Proc. of Robotics: Science and Systems (RSS). DOI: 10.15607/RSS.2021.XVII.083, Online: http://www.roboticsproceedings.org/rss17/p083.pdf
Schlegel, K., Neubert, P. & Protzel, P. (2022) HDC-MiniROCKET: Explicit Time Encoding in Time Series Classification with Hyperdimensional Computing. In Proc. of International Joint Conference on Neural Networks (IJCNN). (to appear, early access: https://arxiv.org/pdf/2202.08055.pdf)