
Into the fourth dimension: The road to AV commercialisation 

 


The large-scale deployment of autonomous vehicle fleets in real-world environments is a hugely complicated and delicate process. What, then, is most urgently needed to speed it up? A solution from Korea may provide an answer to this burning question, as the man behind the invention, Dr. Jae-Eun Lee, explains to Intertraffic.

Large-scale deployment of autonomous vehicles (AVs) is not exclusively a “self-driving software” challenge. Essentially, it’s a complex, systems-engineering puzzle involving perception, safety validation, regulation, infrastructure, and economics.
AVs have to perceive the environment accurately in rain, fog, snow, tunnels, glare, dust, and night-time conditions. Cameras and LiDAR can degrade significantly under poor visibility, creating gaps in detection reliability. Radar has historically been more robust in adverse weather, but conventional automotive radar lacked the spatial resolution needed for high-confidence object classification.

bitsensing, a radar solution company based in Gyeonggi-do, has just announced the release of its brand-new AIR4D Imaging Radar, specifically engineered to help driverless vehicle practitioners deploy autonomous fleets in real-world environments faster and at scale. Unlike the typical closed systems offered by other 4D radar solutions, AIR4D Imaging Radar provides raw data outputs.

The key differentiator: to drastically speed up the commercialisation of AVs worldwide, AIR4D gives companies direct access to high-resolution 4D sensor data (point cloud data and Doppler data), including radar raw data outputs, to train smarter models.

Access to the raw radar data is critical because it enables developers and AV companies to continuously refine perception models, validate performance and accelerate the path from testing to safe, large-scale fleet deployment.  


 


Dr. Jae-Eun Lee, CEO of bitsensing, said of the launch of AIR4D:

“By delivering high-resolution 4D perception data including, importantly, all raw data outputs, our goal is to empower autonomous vehicle companies to build systems at speed and at scale. Fusing cameras with 4D imaging radar is essential for autonomous vehicles and their commercialisation, because a camera-plus-radar architecture delivers enhanced accuracy in the detection and classification of objects.”


A 4D RADAR PURPOSE-BUILT FOR AVS  
AIR4D delivers detailed 4D sensor data designed specifically for AV AI models, while being optimized for power and heat efficiency, helping the vehicles operate reliably in the real world. By contrast, many 4D radars were developed for Advanced Driver Assistance Systems (ADAS) functions in passenger vehicles, and not specifically for full autonomous driving functionality.  

In addition, AIR4D Imaging Radar relies on a camera-plus-radar architecture for AVs, opening a viable path to significantly lower per-vehicle sensor costs, while accelerating AV deployment on roads around the world. 

AIR4D’s off-the-shelf deployment offers: 

•    Direct velocity per object: bitsensing’s radar measures how fast surrounding vehicles, cyclists or pedestrians are moving in real time, enabling faster and more accurate decision-making for AVs. 
•    Long-range detection up to 300m: It identifies vehicles and obstacles farther down the road, giving autonomous vehicles more time to react safely. 
•    Accuracy in night-time and zero-light environments: The 4D radar can perform in near-total darkness (near 0 lux), helping AVs maintain awareness at night or in poorly lit areas.  
•    Stable in harsh weather: AIR4D delivers strong sensing performance in rain, fog, snow and other challenging conditions that can reduce visibility for other sensors, as 4D radar millimetre-wave frequencies penetrate these adverse environmental barriers. With AV programs moving toward real-world commercial deployment, this reliability is no longer a nice-to-have; it is a baseline requirement. 
•    Deep integration with camera sensors: The solution works in combination with cameras, and its robust distance and velocity measurements complement the high-resolution imagery from cameras. This results in a comprehensive perception system that enhances the reliability of autonomous driving.  
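The camera-plus-radar fusion described above typically works by projecting radar detections into the camera image so that each visual object can be matched with a measured distance and velocity. The sketch below is a minimal, generic illustration of that step using a pinhole camera model; it does not represent bitsensing's API, and the intrinsic matrix values are hypothetical.

```python
import numpy as np

def project_radar_to_image(points_xyz, K):
    """Project 3D radar detections (in camera coordinates: x right,
    y down, z forward, metres) into pixel coordinates with a
    pinhole camera model described by the intrinsic matrix K."""
    pts = np.asarray(points_xyz, dtype=float)
    z = pts[:, 2]
    uvw = (K @ pts.T).T            # homogeneous image coordinates
    uv = uvw[:, :2] / z[:, None]   # perspective divide by depth
    return uv

# Hypothetical intrinsics: 1000 px focal length, 1280x720 image.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

# A radar detection 2 m to the right of centre, 50 m ahead,
# lands 40 px right of the image centre: pixel (680, 360).
uv = project_radar_to_image([[2.0, 0.0, 50.0]], K)
```

In a real pipeline the radar points would first be transformed from the radar frame into the camera frame using a calibrated extrinsic transform; that step is omitted here for brevity.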

 


DEEPER UNDERSTANDING
“Cameras are a crucial basis for providing rich visual details to AVs, which include reading and interpreting signs on the road. The context they provide is indispensable. Algorithms have advanced significantly in recent years, meaning camera sensors can now also estimate depth. Meanwhile, the price points of these devices have become much more affordable,” says Dr Lee.

PULL QUOTE
Cameras are a crucial basis for providing rich visual details to AVs, which include reading and interpreting signs on the road. The context they provide is indispensable

“Yet cameras on their own face challenges, and this is especially true in poor weather conditions. Adverse conditions can really hamper a camera’s visibility. This is why the fusing of 4D imaging radar is so effective.”

“Radar technology relies on emitting electromagnetic waves, which means a radar is able to ‘see’ through any weather conditions,” he continues. “The key value-add of 4D radar over previous iterations of the technology (i.e. 3D) is that it provides elevation data, giving an AV an incredibly detailed, real-time picture of its environment. Its detailed point cloud data and Doppler data deliver a deep contextual understanding of the surroundings that does not rely purely on visual detail, as cameras do.”
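The four dimensions Dr Lee describes are range, azimuth, elevation, and Doppler (radial velocity). As a rough, hedged illustration of what one such detection contains, the sketch below converts a single detection from spherical radar coordinates into a Cartesian point plus the line-of-sight velocity vector; the function name and axis convention are assumptions for the example, not part of AIR4D's interface.

```python
import math

def radar_detection_to_cartesian(r, azimuth_deg, elevation_deg, doppler):
    """Convert one 4D radar detection (range in metres, azimuth and
    elevation in degrees, Doppler radial velocity in m/s) into a
    Cartesian point (x forward, y left, z up) plus the radial
    velocity vector directed along the line of sight."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = r * math.cos(el) * math.cos(az)
    y = r * math.cos(el) * math.sin(az)
    z = r * math.sin(el)
    # Doppler measures only the velocity component along the line of
    # sight, so scale the unit direction vector (x, y, z) / r by it.
    vx, vy, vz = (doppler * c / r for c in (x, y, z))
    return (x, y, z), (vx, vy, vz)

# A target 100 m dead ahead, closing at 5 m/s (negative Doppler):
point, velocity = radar_detection_to_cartesian(100.0, 0.0, 0.0, -5.0)
```

The elevation angle is exactly what 3D automotive radars lacked; without it, the `z` term above collapses and a bridge overhead is indistinguishable from an obstacle on the road.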



A CLEARER PICTURE
Until now, AVs have relied on other sensors for the fine 3D spatial accuracy that 3D radars could not replicate, such as distinguishing a pedestrian from a vehicle, or a road sign from an obstacle. 4D imaging radar closes this gap by adding elevation data. AVs equipped with 4D radar get a high-resolution, real-time spatial picture of their environment across all four dimensions, which is the level of perception fidelity that safe autonomous driving demands.

Dr Lee concludes:

“The benefit of a camera-plus-radar architecture is that it creates a more sustainable and scalable approach to commercialising full autonomous driving. Crucially, given the breadth of the data and information this type of architecture generates, it puts safety at its heart, helping to foster trust in AVs amongst all stakeholders.”

This is why many AV developers are now seeing 4D radar as essential for scalable Level 3–Level 4 autonomy, rather than merely a supplemental sensor.

AVs equipped with 4D radar get a high-resolution, real-time spatial picture of their environment across all four dimensions, which is the level of perception fidelity that safe autonomous driving demands.




 
