In the big data era, data related to internet users is characterized by a growing number of samples and characteristic dimensions. Such data plays a pivotal role in diverse fields, including augmented reality, mixed reality, virtual reality, and the anticipated 6G services (Rani et al.,
2022). However, as the number of samples and characteristics expands, the complexity of data processing escalates. Hence, using “low-loss” dimensionality reduction (DR) techniques to obtain optimal low-dimensional representations has become a pressing concern in numerous burgeoning domains (Jia et al.,
2022). Current DR techniques mainly cater to datasets with many samples and typically operate directly on the samples’ characteristics, as in Principal Component Analysis (PCA), Factor Analysis (FA), and Non-negative Matrix Factorization (NMF) (Roy et al.,
2022; Sawant & Bhurchandi,
2022; Yang et al.,
2022). These algorithms excel at extracting low-dimensional data from large sample sets. However, in high-dimensional datasets with limited samples, the number of characteristic dimensions exceeds the sample count, so the covariance matrix becomes singular; applied to such data, the performance of these methods diminishes. Lasso, a convex optimization method based on the L1 norm, offers sparsity, data simplification, and subset selection (Bhadra et al.,
2019). It can be applied to extract characteristics from high-dimensional datasets with small samples that exhibit multicollinearity. Though Lasso has found applications in areas such as image recognition and gene analysis (Deutelmoser et al.,
2021; Liu et al.,
2021), it has no explicit closed-form solution; its reliance on estimation algorithms to determine the regression coefficients diminishes the interpretability of its results.
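Both points above, the singular covariance matrix when characteristic dimensions exceed the sample count, and Lasso's reliance on iterative estimation in the absence of a closed-form solution, can be illustrated with a short sketch. This is a minimal example, not the paper's method: the synthetic data, the penalty weight `lam`, and the `lasso_cd` helper are hypothetical, and cyclic coordinate descent with soft-thresholding stands in for one common Lasso estimation algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small-sample, high-dimensional setting: n = 20 samples, p = 100 features.
n, p = 20, 100
X = rng.standard_normal((n, p))
# Only the first three features truly influence y (sparse ground truth).
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + 0.01 * rng.standard_normal(n)

# The p x p sample covariance matrix has rank at most n - 1 < p, so it is
# singular; covariance-based methods such as PCA and FA cannot invert it.
cov = np.cov(X, rowvar=False)
print(np.linalg.matrix_rank(cov))  # at most n - 1 = 19

def lasso_cd(X, y, lam, n_iter=200):
    """Estimate Lasso coefficients by cyclic coordinate descent with
    soft-thresholding (Lasso has no closed-form solution)."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with feature j's current contribution removed.
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r
            # Soft-thresholding update induced by the L1 penalty.
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

beta_hat = lasso_cd(X, y, lam=1.0)
print(np.flatnonzero(np.abs(beta_hat) > 1e-3))  # indices of the sparse support
```

Despite the singular covariance matrix, the L1 penalty drives most coefficients exactly to zero, recovering a sparse subset of characteristics; the price is that the coefficients come from an iterative estimation procedure rather than an explicit formula.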