The test conducted by Dongchedi has once again captured public attention.
Recently, Dongchedi gathered 36 popular car models and performed over ten comparative tests on their assisted driving capabilities.
Traditional assisted driving tests, such as evaluating AEB (Autonomous Emergency Braking) in a closed course or automated parking in a lot, are common and familiar to most people.
However, the recent endeavor by Dongchedi, in collaboration with CCTV, focused entirely on real-world road conditions, including city driving scenarios and, notably, a specially closed-off section of highway, a testing methodology rarely seen before. The tests were also extremely demanding.
For instance, one scenario involved a “vanishing lead vehicle.” While driving at the speed limit on the highway, a large truck ahead suddenly changed lanes to the left, revealing a broken-down vehicle. Simultaneously, the adjacent fast lane was filled with “actor” vehicles strategically blocking any lane change. This presented a critical decision point: brake or attempt a lane change. It was a true test of the system’s limits.
Another challenging scenario was “aggressive cut-ins” on the highway. While in the leftmost lane, a vehicle from the right suddenly forced its way into the lane, leaving very little braking time for the car behind. This is often exacerbated by solid lane lines in merging areas.

Perhaps the most demanding test was the “wild boar collision” scenario. A simulated wild boar unexpectedly crossed the road at 10 km/h on the highway. In such a sudden and unpredictable situation, even a human driver might struggle to react effectively, let alone an assisted driving system.
Following the video release, some netizens even created a “wild boar collision ranking” based on the footage.
These dramatic visuals alone were enough to pique public interest, and Dongchedi certainly succeeded in generating considerable hype around this test.
However, the results were rather surprising. In highway accident scenario simulations, Tesla’s two models, the Model 3 and Model X, performed the best. Out of the six scenarios, the Model 3 failed the wild boar collision test but successfully navigated the other five. The Model X unfortunately failed the temporary construction zone scenario but passed the remaining five.
Brands known for their advanced assisted driving systems, such as Huawei, Li Auto, and Xpeng, did not achieve as high a pass rate as Tesla.
Upon seeing these results, Elon Musk immediately took to X (formerly Twitter) to boast, highlighting that this was achieved without localized training and expressing a goal for Tesla to pass all six tests in the future.
Certainly, Musk’s “humble brag” seemed well-earned, at least in the context of this specific test.
Conversely, Huawei’s HarmonyOS Intelligent Mobility brands, such as AITO and Luxeed, opted for a “no comment” stance.
The stark difference in reactions between these parties suggests that there is considerable debate within the industry regarding the validity of this test. Many have pointed out that the testing standards were not uniform across all participants.
The most significant point of contention appears to be the “vanishing lead vehicle” scenario, primarily due to inconsistencies in vehicle speeds during the tests.
Dongchedi’s methodology was to set the assisted driving speed for all vehicles at a 10% offset from the speed limit. However, due to differing strategies implemented by various manufacturers, some vehicles reached speeds of 130 km/h, while others remained within 120 km/h. This difference in speed could lead to one vehicle colliding with the obstacle while another, traveling slower, might successfully avoid it, even if both initiated braking at the same point.
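How much a 10 km/h difference matters can be sketched with simple stopping-distance arithmetic. The figures below are illustrative assumptions, not data from the test: full emergency braking at about 8 m/s² on dry asphalt and a 0.5 s detection-and-actuation delay.

```python
# Rough stopping-distance comparison for 120 vs. 130 km/h.
# Assumed values (not from the Dongchedi test): ~8 m/s^2 peak
# deceleration on dry asphalt and a 0.5 s system delay.

def stopping_distance(speed_kmh: float,
                      decel: float = 8.0,
                      delay_s: float = 0.5) -> float:
    """Metres to a full stop: distance covered during the delay
    plus the braking distance v^2 / (2a)."""
    v = speed_kmh / 3.6                     # km/h -> m/s
    return v * delay_s + v ** 2 / (2 * decel)

d_120 = stopping_distance(120)
d_130 = stopping_distance(130)
print(f"120 km/h: {d_120:.0f} m, 130 km/h: {d_130:.0f} m")
```

Under these assumptions the faster car needs roughly 14 m more to stop, several car lengths, which alone can separate a pass from a collision when both vehicles brake at the same point.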
Another area of inconsistency was the following distance. Dongchedi reportedly set all vehicles to a “medium” following distance setting.
However, due to the varying detection strategies of different models, the actual gaps differed significantly. Some observers noted that when the lead vehicle cut out, some cars followed closely, separated by only three white lane markings, while others maintained a greater distance of up to seven.
Undoubtedly, vehicles that maintain a larger following distance have more time for their assisted driving systems to process information and make decisions, thereby gaining a natural advantage in this particular test. This suggests that the test, while seemingly evaluating AEB, was more of a measure of which vehicle maintained a greater following distance.
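The advantage of the longer gap can be put into seconds with back-of-the-envelope arithmetic. This assumes each counted “white line” corresponds to one dash-plus-gap lane-marking cycle of roughly 15 m (about a 6 m painted dash plus a 9 m gap, typical for Chinese expressways, though actual dimensions vary by road):

```python
# Convert counted lane-marking cycles into a following-time gap.
# Assumption (not from the test): one dash-plus-gap cycle is ~15 m.

MARKING_CYCLE_M = 15.0

def time_gap_s(num_markings: int, speed_kmh: float = 120.0) -> float:
    """Seconds of following time implied by the counted markings."""
    distance_m = num_markings * MARKING_CYCLE_M
    return distance_m / (speed_kmh / 3.6)   # distance / speed in m/s

close_gap = time_gap_s(3)   # ~45 m behind the lead vehicle
far_gap = time_gap_s(7)     # ~105 m behind the lead vehicle
print(f"3 markings: {close_gap:.2f} s, 7 markings: {far_gap:.2f} s")
```

Under these assumptions, the difference is roughly 1.4 s versus 3.2 s of reaction margin at 120 km/h, more than double the time for the system to detect the obstacle and act.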
Consequently, in the actual test, vehicles with more conservative driving strategies might have achieved better results in this segment.
Some have concluded that due to these uncontrolled variables, the test was merely a gimmick and lacked real significance.
Indeed, this test cannot accurately represent the true performance or safety of each manufacturer’s assisted driving systems, as it was not designed to be a comprehensive and universally applicable evaluation.
The resulting rankings derived from this test are therefore of questionable value. For example, while Tesla may show more “green” (passed) items, can one definitively claim that Tesla’s FSD, which is known to sometimes exhibit aggressive behavior like running red lights and disregarding traffic rules in urban environments, is the safest system overall?
However, from my perspective, while Dongchedi’s series of videos may not achieve perfect scientific rigor, the rather imprecise nature of this test does highlight a crucial point: in the face of extreme scenarios, no assisted driving system can offer 100% assurance of safety. This holds true even for systems lauded for their advanced capabilities, such as Huawei’s assisted driving technology.
The test conducted on real highways allowed for the activation of all assisted driving features. The simulated scenarios, such as construction zones and stalled trucks, are all plausible situations encountered in daily driving.
It is evident that no vehicle was able to perfectly pass all six tests, and the overall pass rate for any single scenario did not exceed 50%. Even for the vehicles that did pass, this success is not a guarantee of consistent performance; the same scenario repeated might yield a different outcome.
In previous years, domestic manufacturers engaged in overly aggressive marketing for their assisted driving technologies, leading many to believe that these systems had reached a very high level of capability. This even resulted in irresponsible behaviors, such as drivers falling asleep while on highways.
Often, as vehicles become increasingly adept at handling most driving situations, drivers gain a false sense of security, believing the systems are infallible and consequently lowering their focus on the road when assisted driving is engaged.
In reality, this perceived invincibility stems from not yet encountering those critical 0.01% “corner cases.”
Dongchedi’s test serves as a reminder that the systems currently in use are still primarily Level 2 assisted driving technology. While advancements are being made, their capabilities remain limited, particularly in extreme situations, where errors can easily occur.
I would like to advise drivers who rely heavily on assisted driving to prioritize safety over efficiency when engaging these systems. For instance, setting a lower speed limit and maintaining a greater following distance can provide more reaction time when faced with unusual or critical situations.
Ultimately, I reiterate my belief that the rankings derived from this Dongchedi test hold no inherent meaning. The test’s sole, albeit perhaps unintentional, conclusion is what automotive experts and relevant authorities have been emphasizing: assisted driving technology has its limits, and these limitations are significant.
Life is precious, and caution is always prudent. When entrusting the driving process entirely to a machine, consider whether you are prepared to accept the consequences of that 0.01% “corner case.”