Market Research Report
Product code: 1660087
Research Report on AI Foundation Models and Their Applications in Automotive Field, 2024-2025
This report surveys China's automotive industry, providing an overview of AI foundation models, their types, common technologies, key companies, and automotive application cases.
Research on AI foundation models and automotive applications: reasoning, cost reduction, and explainability
Reasoning capabilities boost the performance of foundation models.
Since the second half of 2024, foundation model companies inside and outside China have launched reasoning models, using frameworks such as Chain-of-Thought (CoT) to enhance the ability of foundation models to handle complex tasks and make decisions autonomously.
The intensive releases of reasoning models aim to strengthen foundation models' handling of complex scenarios and lay the groundwork for agent applications. In the automotive industry, improved reasoning capabilities can address pain points in AI applications, for example, sharpening a cockpit assistant's intent recognition in complex semantic contexts and improving the accuracy of spatiotemporal prediction in autonomous driving planning and decision-making.
In 2024, the reasoning technologies of mainstream foundation models introduced in vehicles revolved primarily around CoT and its variants (e.g., Tree-of-Thought (ToT), Graph-of-Thought (GoT), and Forest-of-Thought (FoT)), combined with generative models (e.g., diffusion models), knowledge graphs, causal reasoning models, cumulative reasoning, and multimodal reasoning chains in different scenarios.
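The difference between a linear CoT and its tree-style variants is essentially the search strategy over intermediate "thoughts". The toy sketch below illustrates the ToT idea as a small beam search; the task, the thought generator, and the scoring heuristic are all made-up stand-ins (real systems use an LLM to propose and evaluate thoughts):

```python
# Toy Tree-of-Thought sketch: propose thoughts, score them, keep a beam.
# Everything here is a hypothetical stand-in for LLM-driven generation.

def propose_thoughts(state):
    """Propose successor states (stand-in for an LLM generating thoughts)."""
    value, steps = state
    return [
        (value + 3, steps + ["+3"]),
        (value * 2, steps + ["*2"]),
    ]

def score(state, target):
    """Heuristic value of a partial solution (closer to target is better)."""
    return -abs(state[0] - target)

def tree_of_thought(start, target, depth=4, beam=2):
    """Beam search over thoughts: expand, score, keep the best `beam` states."""
    frontier = [(start, [])]
    for _ in range(depth):
        candidates = [s for st in frontier for s in propose_thoughts(st)]
        candidates.sort(key=lambda st: score(st, target), reverse=True)
        frontier = candidates[:beam]
        for st in frontier:
            if st[0] == target:
                return st[1]  # the winning chain of thoughts
    return frontier[0][1]  # best-effort chain

print(tree_of_thought(start=1, target=11))  # → ['+3', '*2', '+3']
```

With `beam=1` this degenerates into a single greedy chain, i.e., plain CoT; widening the beam is what lets tree-style variants recover from a locally poor intermediate thought.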
For example, the Modularized Thinking Language Model (MeTHanol) proposed by Geely lets a foundation model synthesize human thoughts to supervise the hidden layers of an LLM, generating human-like thinking behavior and adapting to daily conversations and personalized prompts. This enhances the thinking and reasoning capabilities of large language models and improves explainability.
In 2025, the focus of reasoning technology will shift to multimodal reasoning. Common training techniques include instruction fine-tuning, multimodal in-context learning, and multimodal CoT (M-CoT), often enabled by combining multimodal fusion and alignment with LLM reasoning techniques.
Explainability builds trust between AI and users.
Before users can experience the "usefulness" of AI, they need to trust it. In 2025, the explainability of AI systems therefore becomes a key factor in growing the user base of automotive AI. This challenge can be addressed by displaying a long CoT.
The explainability of AI systems can be achieved at three levels: data explainability, model explainability, and post-hoc explainability.
In Li Auto's case, its L3 autonomous driving uses "AI reasoning visualization technology" to intuitively present the thinking process of its end-to-end + VLM models, covering the entire process from perception input from the physical world to the driving decisions output by the foundation model, enhancing users' trust in the intelligent driving system.
In Li Auto's "AI reasoning visualization technology":
The attention system displays the traffic and environmental information perceived by the vehicle, evaluates the behavior of traffic participants in real-time video streams, and uses heatmaps to highlight the evaluated objects.
The end-to-end (E2E) model displays the thinking process behind its driving trajectory output: it considers various driving trajectories, presents 10 candidate outputs, and finally adopts the most likely one as the driving path.
The vision language model (VLM) displays its perception, reasoning, and decision-making processes through dialogue.
The dialogue interfaces of various reasoning models also employ a long CoT to break down the reasoning process. For example, in conversations with users, DeepSeek R1 first presents the decision made at each node through a CoT and then provides an explanation in natural language.
Additionally, most reasoning models, including Zhipu's GLM-Zero-Preview, Alibaba's QwQ-32B-Preview, and Skywork 4.0 o1, can display their long CoT reasoning processes.
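The "decision at each node, then a natural-language explanation" pattern is straightforward to render. A minimal sketch, assuming a hypothetical trace structure of (decision, reason) tuples (real models emit their own trace formats):

```python
# Sketch of surfacing a long CoT in a dialogue interface: per-node
# decisions first, then a plain-language summary for the user.

def render_cot(nodes):
    """Format (decision, reason) tuples as a user-visible reasoning trace."""
    lines = [f"Step {i}: {decision} ({reason})"
             for i, (decision, reason) in enumerate(nodes, start=1)]
    lines.append("Summary: " + "; then ".join(d for d, _ in nodes) + ".")
    return "\n".join(lines)

# Hypothetical trace for a navigation request
trace = [
    ("identify intent", "user asked for a route"),
    ("check constraints", "avoid toll roads"),
    ("produce answer", "fastest toll-free route selected"),
]
print(render_cot(trace))
```

Keeping the per-node decisions separate from the final summary mirrors the two-stage presentation described above: the trace establishes trust, the summary stays readable.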
DeepSeek lowers the barrier to introducing foundation models into vehicles, enabling both performance improvement and cost reduction.
Do improved reasoning capabilities and overall performance mean higher costs? Not necessarily, as DeepSeek's popularity shows. In early 2025, OEMs began connecting to DeepSeek, primarily to enhance the comprehensive capabilities of vehicle foundation models in specific applications.
In fact, before DeepSeek's models launched, OEMs had already been developing and iterating their automotive AI foundation models. In the case of cockpit assistants, some had completed the initial construction of their solutions and connected to cloud foundation model suppliers for trial operation, or had shortlisted suppliers, including cloud service providers such as Alibaba Cloud, Tencent Cloud, and Zhipu. They connected to DeepSeek in early 2025, valuing the following:
Strong reasoning performance: the R1 reasoning model, for example, is comparable to OpenAI o1 and even excels in mathematical logic.
Lower costs: performance is maintained while training and inference costs stay among the lowest in the industry.
By connecting to DeepSeek, OEMs can cut the costs of hardware procurement, model training, and maintenance while maintaining performance when deploying intelligent driving and cockpit assistants:
Low-computing-overhead technologies facilitate high-level autonomous driving and technology democratization: high-performance models can be deployed on low-compute automotive chips (e.g., edge computing units), reducing reliance on expensive GPUs. Combined with the DualPipe algorithm and FP8 mixed-precision training, these technologies optimize compute utilization, allowing mid- and low-end vehicles to offer high-level cockpit and autonomous driving features and accelerating the popularization of intelligent cockpits.
Enhanced real-time performance: autonomous driving systems must process large amounts of sensor data in real time, and cockpit assistants must respond quickly to user commands, while vehicle computing resources are limited. With lower computing overhead, DeepSeek enables faster processing of sensor data, more efficient use of intelligent driving chips (DeepSeek reports 90% utilization of NVIDIA A100 chips during server-side training), and lower latency (e.g., on the Qualcomm 8650 platform with 100 TOPS of compute, DeepSeek reduces inference response time from 20 milliseconds to 9-10 milliseconds). In intelligent driving systems this ensures that driving decisions are timely and accurate, improving safety and user experience; in cockpit systems it helps assistants respond quickly to voice commands, achieving smooth human-computer interaction.
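FP8 mixed-precision training, mentioned above, trades numeric precision for memory and bandwidth: an FP8 E4M3 value carries only 3 mantissa bits versus FP32's 23, at a quarter of the storage. A crude software simulation of that 3-bit mantissa rounding (ignoring E4M3's exact exponent range and special values) shows how coarse the representable values become:

```python
import math

def quantize_fp8_e4m3(x):
    """Crude simulation of FP8 E4M3 rounding: keep ~3 mantissa bits.

    Decomposes x = m * 2**e with 0.5 <= |m| < 1, rounds m to a multiple
    of 1/16 (the implicit leading bit plus 3 stored bits), and rebuilds.
    """
    if x == 0:
        return 0.0
    m, e = math.frexp(x)
    m = round(m * 16) / 16
    return math.ldexp(m, e)

print(quantize_fp8_e4m3(3.14159))  # → 3.25
```

The worst-case relative error of this rounding is about 1/16 (~6%), which is why FP8 is paired with higher-precision accumulation in mixed-precision schemes rather than used alone; the payoff is that four FP32 bytes shrink to one, cutting memory traffic on compute-constrained automotive chips.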
Definitions