Hyperdimensional computing (HD), also known as vector symbolic architectures (VSA), is an emerging and promising paradigm for cognitive computing. At its core, HD/VSA represents information as high-dimensional random vectors. The recent surge of research in this field is driven by its remarkable computational efficiency and its strength in few-shot learning scenarios. Nonetheless, the current literature lacks a comprehensive comparative analysis of the various methodologies, since each method is evaluated on a different benchmark. This gap obstructs monitoring of the field's state of the art and is a significant barrier to its overall progress. To address it, this review offers both a conceptual overview of the recent literature and a comprehensive comparative study of HD/VSA classification techniques. Our exploration starts with an overview of the strategies proposed to encode information as high-dimensional vectors, which serve as the building blocks of classification models. We then evaluate the classification methods proposed in the existing literature, including techniques such as retraining and regenerative training that augment a model's performance. We conclude with a comprehensive empirical study that systematically compares HD/VSA classification approaches on two benchmarks: the first a set of seven datasets popular in the HD/VSA literature, and the second 121 datasets from the UCI Machine Learning Repository. To facilitate future research on classification with HD/VSA, we have open-sourced the benchmark and our implementations of the reviewed methods. Our findings yield significant insights.
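The core encoding idea, representing structured data as a single high-dimensional vector, can be illustrated with a key-value scheme over random bipolar hypervectors. This is a minimal NumPy sketch under assumed conventions (bipolar vectors, binding by elementwise multiplication, bundling by majority sign); all names, sizes, and parameters here are illustrative, not the exact encoders compared in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # dimensionality; HD/VSA typically uses thousands of dimensions

# Illustrative key-value encoding: each feature index gets a random bipolar
# "key" hypervector, and each quantized feature level a random "value"
# hypervector. Binding is elementwise multiplication; bundling is the
# elementwise majority (sign of the sum).
n_features, n_levels = 5, 8
keys = rng.choice([-1, 1], size=(n_features, D))
levels = rng.choice([-1, 1], size=(n_levels, D))

def encode(sample):
    """Encode a vector of quantized feature levels into one hypervector."""
    bound = keys * levels[sample]      # bind each key with its value
    return np.sign(bound.sum(axis=0))  # bundle pairs into a single vector

x = encode(np.array([0, 3, 5, 7, 2]))

# The representation is similarity-preserving: a key-value pair that was
# bundled into x has markedly higher similarity to x than one that was not.
sim_in = (keys[1] * levels[3]) @ x / D   # pair present in the encoding
sim_out = (keys[1] * levels[0]) @ x / D  # pair absent from the encoding
```

A classification model is then built from such encodings, e.g. by bundling the hypervectors of all training samples of a class into a class prototype.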
First, encodings based on key-value pairs emerge as the optimal choice, offering superior accuracy while remaining highly efficient. Second, iterative adaptive methods prove remarkably effective and can be complemented by a regenerative strategy, depending on the specific problem. Moreover, we show that HD/VSA generalizes well when trained on a limited number of instances. Lastly, we demonstrate the robustness of HD/VSA methodologies by subjecting the model memory to large numbers of bit flips: the model's performance remains reasonably stable until roughly 40% of the bits are flipped, after which it degrades drastically. Overall, our thorough evaluation of the different methods reveals favorable trends in both evaluation accuracy and execution time.
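The bit-flip robustness property follows from the distributed nature of hypervector memories: corrupting a large fraction of a prototype's bits still leaves it closer to a query than unrelated prototypes are. The following sketch illustrates the effect with binary hypervectors and nearest-neighbor classification under Hamming distance; the setup (10 random class prototypes, a 40% flip rate) is an illustrative assumption, not the paper's experimental protocol.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000       # hypervector dimensionality
n_classes = 10   # illustrative number of class prototypes

# Model memory: one binary prototype hypervector per class.
prototypes = rng.integers(0, 2, size=(n_classes, D), dtype=np.int8)

def classify(query):
    """Return the class whose prototype has the smallest Hamming distance."""
    return int(np.argmin((prototypes != query).sum(axis=1)))

# Corrupt the prototype of class 3 by flipping 40% of its bits.
corrupted = prototypes[3].copy()
flip = rng.choice(D, size=int(0.4 * D), replace=False)
corrupted[flip] ^= 1

# Despite the heavy corruption, the query is still classified correctly,
# since unrelated prototypes sit at ~50% Hamming distance.
label = classify(corrupted)
```

Once the flip rate approaches 50%, the corrupted vector becomes statistically indistinguishable from the other random prototypes, which matches the sharp degradation observed in our experiments.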