TY - JOUR
T1 - Classification Using Hyperdimensional Computing
T2 - A Review
AU - Ge, Lulu
AU - Parhi, Keshab K.
N1 - Publisher Copyright:
© 2001-2012 IEEE.
PY - 2020/4/1
Y1 - 2020/4/1
N2 - Hyperdimensional (HD) computing is built upon a unique data type referred to as hypervectors, whose dimension is typically in the tens of thousands. Proposed to solve cognitive tasks, HD computing aims at calculating similarity among its data. Data transformation is realized by three operations: addition, multiplication, and permutation. The ultra-wide data representation introduces redundancy against noise, and since information is evenly distributed over every bit of a hypervector, HD computing is inherently robust. Additionally, owing to the nature of these three operations, HD computing offers fast learning, high energy efficiency, and acceptable accuracy in learning and classification tasks. This paper introduces the background of HD computing and reviews data representation, data transformation, and similarity measurement. The orthogonality of hypervectors in high dimensions presents opportunities for flexible computing. To balance the tradeoff between accuracy and efficiency, strategies include, but are not limited to, encoding, retraining, binarization, and hardware acceleration. Evaluations indicate that HD computing shows great potential for problems involving data in the form of letters, signals, and images. In particular, HD computing shows significant promise as a lightweight classifier replacing machine learning algorithms in the Internet of Things (IoT).
AB - Hyperdimensional (HD) computing is built upon a unique data type referred to as hypervectors, whose dimension is typically in the tens of thousands. Proposed to solve cognitive tasks, HD computing aims at calculating similarity among its data. Data transformation is realized by three operations: addition, multiplication, and permutation. The ultra-wide data representation introduces redundancy against noise, and since information is evenly distributed over every bit of a hypervector, HD computing is inherently robust. Additionally, owing to the nature of these three operations, HD computing offers fast learning, high energy efficiency, and acceptable accuracy in learning and classification tasks. This paper introduces the background of HD computing and reviews data representation, data transformation, and similarity measurement. The orthogonality of hypervectors in high dimensions presents opportunities for flexible computing. To balance the tradeoff between accuracy and efficiency, strategies include, but are not limited to, encoding, retraining, binarization, and hardware acceleration. Evaluations indicate that HD computing shows great potential for problems involving data in the form of letters, signals, and images. In particular, HD computing shows significant promise as a lightweight classifier replacing machine learning algorithms in the Internet of Things (IoT).
UR - http://www.scopus.com/inward/record.url?scp=85086036544&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85086036544&partnerID=8YFLogxK
U2 - 10.1109/MCAS.2020.2988388
DO - 10.1109/MCAS.2020.2988388
M3 - Review article
AN - SCOPUS:85086036544
SN - 1531-636X
VL - 20
SP - 30
EP - 47
JO - IEEE Circuits and Systems Magazine
JF - IEEE Circuits and Systems Magazine
IS - 2
M1 - 9107175
ER -