SHANGHAI--(BUSINESS WIRE)--On January 24th, at the "New Architecture of Large Language Model" launch event, Rock AI (a subsidiary of Shanghai Stonehill Technology Co., Ltd.) officially unveiled the first domestic general-purpose large language model built without an attention mechanism—the Yan Model. It is also one of the few large models in the industry that does not rely on the Transformer architecture. Compared with Transformer models of equivalent parameter count, the Yan Model offers 7 times the training efficiency, 5 times the inference throughput, and 3 times the memory capacity. Additionally, it runs losslessly on CPUs, exhibits reduced hallucination in its outputs, and fully supports private deployment.
At the event, Liu Fanping, CEO of Rock AI, said in his address: "We hope that the Yan architecture can serve as infrastructure for the artificial intelligence field and anchor a developer ecosystem in the AI domain. Ultimately, we aim to enable anyone to use general-purpose large models on any device, providing more economical, convenient, and secure AI services, and to promote the construction of an inclusive artificial intelligence future."
The Transformer, the foundational architecture behind large models such as ChatGPT, has achieved significant success, but it still has notable shortcomings, including high compute consumption, heavy memory usage, high cost, and difficulty processing long sequences. To address these issues, the Yan Model replaces the Transformer with Rock AI's newly developed generative "Yan architecture." This architecture enables lossless inference over arbitrarily long sequences on consumer-grade CPUs, aims to match the performance of models with hundreds of billions of parameters while using only tens of billions, and meets enterprises' practical need for low-cost, easily deployed large models.
At the press conference, the research team presented extensive empirical comparisons between the Yan Model and a Transformer model of the same parameter scale. Under identical resource conditions, the experimental data showed that the Yan-architecture model trains 7 times faster and delivers 5 times the inference throughput of the Transformer architecture, while its memory capacity is improved 3-fold. On the long-sequence challenge that confronts the Transformer, the Yan Model also performs well, theoretically supporting inference over sequences of unlimited length.
Additionally, the research team has devised a well-founded associative feature function and memory operator, combined with linear computation methods, to reduce the complexity of the model's internal structure. The newly architected Yan Model aims to open up the previously uninterpretable "black box" of natural language processing, aiding the adoption of large models in high-stakes areas such as healthcare, finance, and law. At the same time, the Yan Model's hardware advantage—running on mainstream consumer-grade CPUs without compression or pruning—significantly broadens the possibilities for deploying large models across industries.
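Rock AI has not published the Yan architecture's actual operators, so the following is only a generic illustration of the idea the release describes: replacing quadratic-cost attention with a fixed-size associative memory that is updated and read in linear time. The function name `linear_memory_scan` and the rank-1 outer-product update are assumptions chosen for clarity, not the Yan Model's real design.

```python
import numpy as np

def linear_memory_scan(queries, keys, values):
    """Process a sequence with a running associative memory.

    Softmax attention compares every token to every earlier token,
    costing O(T^2) time and memory in sequence length T. Here, each
    step instead folds the current key/value pair into a fixed-size
    memory matrix and reads from it with the query, so both time and
    state are linear in T -- the state never grows with the sequence.
    This is a hypothetical sketch, not the Yan Model's operator.
    """
    T, d = queries.shape
    memory = np.zeros((d, d))              # fixed-size state, independent of T
    outputs = np.empty((T, d))
    for t in range(T):
        memory += np.outer(keys[t], values[t])  # write: rank-1 memory update
        outputs[t] = queries[t] @ memory        # read: query the memory
    return outputs

rng = np.random.default_rng(0)
T, d = 6, 4
q, k, v = (rng.standard_normal((T, d)) for _ in range(3))
out = linear_memory_scan(q, k, v)
print(out.shape)
```

Because the memory matrix has constant size, a model built this way can in principle keep consuming tokens indefinitely on a CPU, which is the property the release attributes to the Yan architecture's infinite-length inference.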
Liu Fanping stated, "In the next phase, Rock AI aims to create a full-modality, real-time human-computer interaction system, achieve on-device training, and unify training and inference. We plan to fully connect perception, cognition, decision-making, and action to construct an intelligent loop for general artificial intelligence. This will provide more options as a foundational platform for large models in research areas such as general-purpose robots and embodied intelligence."