MicroCloud Hologram Inc. Announces Innovations in Anomaly Detection
SHENZHEN, China, Feb. 14, 2025 /PRNewswire/ -- MicroCloud Hologram Inc. (NASDAQ: HOLO) ("HOLO" or the "Company"), a technology service provider, announced that it has deeply optimized stacked sparse autoencoders using the DeepSeek open-source model, injecting new vitality into anomaly detection technology and providing an efficient solution.
Importance of Data Quality
Data quality is crucial for model performance. The behavioral data collected in the data preprocessing stage typically contains multiple features with different dimensions and numerical ranges. To eliminate the dimensional differences between features and improve the effectiveness of model training, HOLO applies a normalization step.
Normalization Process
Normalization is a common data preprocessing technique that scales the data to a specific range, typically between 0 and 1 or -1 and 1. By doing so, data from different features can be compared and analyzed on the same scale, avoiding the situation where certain features dominate model training due to their large value ranges. In HOLO's detection project, normalization not only improved the efficiency of model training but also laid a solid foundation for subsequent feature extraction.
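The min-max scaling described above can be sketched in a few lines of NumPy. This is an illustrative example only, not HOLO's actual pipeline; the function name and the sample feature values are assumptions.

```python
import numpy as np

def min_max_normalize(X, feature_range=(0.0, 1.0)):
    """Scale each feature (column) of X into feature_range via min-max scaling."""
    X = np.asarray(X, dtype=float)
    lo, hi = feature_range
    col_min = X.min(axis=0)
    col_max = X.max(axis=0)
    # Guard against constant columns, which would divide by zero.
    span = np.where(col_max > col_min, col_max - col_min, 1.0)
    return lo + (X - col_min) / span * (hi - lo)

# Two features on very different scales (e.g. session count vs. bytes sent):
X = np.array([[1.0, 10_000.0],
              [5.0, 50_000.0],
              [9.0, 90_000.0]])
X_norm = min_max_normalize(X)
```

After scaling, both columns span [0, 1], so neither feature dominates training purely because of its value range.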
Stacked Sparse Autoencoder Model
After the data preprocessing is completed, the next step is to input the processed data into the stacked sparse autoencoder model. The stacked sparse autoencoder is a powerful deep learning architecture composed of multiple autoencoder layers, with each layer responsible for extracting features at different levels. HOLO utilizes the DeepSeek model to dynamically adjust the strength and manner of the sparsity constraint, ensuring that the features learned by each layer of the autoencoder are sparse and representative.
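One common way to impose the sparsity constraint mentioned above is to add a penalty on the hidden activations to the reconstruction loss. The sketch below shows a single autoencoder layer with an L1 sparsity penalty; the weighting, sizes, and function names are illustrative assumptions, not the Company's actual formulation (which may use a KL-divergence penalty or DeepSeek-tuned weights instead).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparse_ae_loss(X, W_enc, b_enc, W_dec, b_dec, sparsity_weight=1e-3):
    """Reconstruction loss plus an L1 sparsity penalty on the hidden code."""
    H = sigmoid(X @ W_enc + b_enc)                   # hidden (sparse) code
    X_hat = H @ W_dec + b_dec                        # reconstruction
    recon = np.mean((X - X_hat) ** 2)                # how well inputs are rebuilt
    sparsity = sparsity_weight * np.mean(np.abs(H))  # push activations toward 0
    return recon + sparsity, H

n_in, n_hidden = 4, 8
X = rng.normal(size=(16, n_in))
W_enc = rng.normal(scale=0.1, size=(n_in, n_hidden))
W_dec = rng.normal(scale=0.1, size=(n_hidden, n_in))
loss, code = sparse_ae_loss(X, W_enc, np.zeros(n_hidden), W_dec, np.zeros(n_in))
```

Minimizing this combined loss trades reconstruction accuracy against sparsity, so each layer learns a compact, representative code rather than simply copying its input.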
Innovative Layer-Wise Training
HOLO has optimized the stacked sparse autoencoder with a greedy, layer-wise training approach that optimizes the parameters of each autoencoder layer step by step. The core of this strategy is to first train the lower layers of the autoencoder to learn the basic features of the input data, then use the output of the lower-layer autoencoder as the input for the next layer, progressively extracting deeper features. This approach enhances the model's expressive power.
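The greedy, layer-wise idea can be sketched with two linear autoencoder layers trained one after another, each layer's codes becoming the next layer's input. This is a minimal pedagogical sketch assuming plain gradient descent and linear layers; the real model uses nonlinear sparse autoencoders and different hyperparameters.

```python
import numpy as np

def train_linear_ae(X, n_hidden, lr=0.01, steps=500, seed=0):
    """Train one linear autoencoder layer by plain gradient descent (sketch)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    We = rng.normal(scale=0.1, size=(X.shape[1], n_hidden))  # encoder weights
    Wd = rng.normal(scale=0.1, size=(n_hidden, X.shape[1]))  # decoder weights
    for _ in range(steps):
        H = X @ We
        E = H @ Wd - X                            # reconstruction error
        Wd -= lr * (2.0 / n) * H.T @ E            # decoder gradient step
        We -= lr * (2.0 / n) * X.T @ (E @ Wd.T)   # encoder gradient step
    return We

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 6))

# Greedy layer-wise stacking: train layer 1, then feed its codes to layer 2.
We1 = train_linear_ae(X, n_hidden=4)
H1 = X @ We1                  # basic features from the first layer
We2 = train_linear_ae(H1, n_hidden=2)
H2 = H1 @ We2                 # deeper features from the second layer
```

Each layer is optimized against a simpler, local objective before the next layer is trained, which is what makes the layer-by-layer strategy tractable for deep stacks.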
Denoising Training Approach
HOLO's stacked sparse autoencoder, trained with the DeepSeek model, adds noise to the input data, requiring the model to reconstruct the original input despite noisy interference. This denoising training approach encourages the model to learn more robust feature representations, enabling accurate anomaly detection in real-world scenarios.
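The denoising setup amounts to corrupting the input while keeping the clean data as the reconstruction target. The snippet below illustrates this with Gaussian noise; the noise type and level are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(X, noise_std=0.1, rng=rng):
    """Denoising setup: corrupt the input; the target stays the clean X."""
    return X + rng.normal(scale=noise_std, size=X.shape)

X_clean = rng.normal(size=(32, 5))
X_noisy = corrupt(X_clean)

# During training the model sees X_noisy but is scored against X_clean:
#   loss = mean((decode(encode(X_noisy)) - X_clean) ** 2)
# so it must learn features robust to the injected noise.
```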
Application of Dropout
In addition to denoising, HOLO also applies Dropout during the training process to reduce model overfitting. Dropout randomly drops a subset of neurons during training, preventing the model from relying on specific neurons to learn the features of the data.
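The standard ("inverted") dropout mechanism described above can be sketched as follows; the drop probability of 0.5 is an illustrative choice, not a disclosed HOLO parameter.

```python
import numpy as np

def dropout(H, p_drop=0.5, rng=None, training=True):
    """Inverted dropout: zero activations with prob p_drop, rescale the rest."""
    if not training or p_drop == 0.0:
        return H                          # no-op at evaluation time
    rng = rng or np.random.default_rng()
    keep = rng.random(H.shape) >= p_drop  # random mask of surviving neurons
    # Rescale survivors so the expected activation is unchanged.
    return H * keep / (1.0 - p_drop)

rng = np.random.default_rng(0)
H = np.ones((4, 10))
H_train = dropout(H, p_drop=0.5, rng=rng)   # entries are either 0.0 or 2.0
H_eval = dropout(H, training=False)         # unchanged at inference
```

Because a different random subset of neurons is dropped on every training step, no single neuron can be relied upon, which is what curbs overfitting.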
Efficient Training with DeepSeek
The DeepSeek model utilizes a distributed computing framework, allocating training tasks across multiple computational nodes for parallel execution and significantly shortening training time. Combined with a pretraining-plus-fine-tuning strategy, this accelerates model convergence and improves performance.
About MicroCloud Hologram Inc.
MicroCloud is committed to providing leading holographic technology services to its customers worldwide. Its offerings include high-precision holographic light detection and ranging (LiDAR) solutions, exclusive holographic LiDAR algorithms, and breakthrough imaging solutions. For more information, please visit http://ir.mcholo.com/.
Safe Harbor Statement
This press release contains forward-looking statements as defined by the Private Securities Litigation Reform Act of 1995. Forward-looking statements include plans, objectives, goals, strategies, and other statements that are not historical facts. For further details, please review the Company's filings with the SEC at www.sec.gov.