Foreword
This report combs through new products and technology trends in four major sectors - intelligent cockpit, intelligent driving, body & chassis, and energy & powertrain - in 2025 and Q1 2026, summarizing representative emerging technologies and innovative applications and distilling hundreds of industry characteristics. New sensor design stands out as one of the industry highlights of 2025-2026, with innovative applications abounding in LiDAR, radar and cameras, as well as auditory, gas and other sensors.
LiDAR Sector:
Huawei, Hesai, and RoboSense, among others, have launched multi-channel LiDARs to meet L3/L4 autonomous driving requirements. Leishen Intelligent System’s fiber-optic LiDAR adopts a 1550nm fiber laser, offering a maximum detection range of 1500 meters and a precision of ±5cm.
Huawei’s phased-array LiDAR supports seamless switching between multiple bands and real-time tracking of complex road conditions, improving detection accuracy by 30%.
Fortsense’s all solid-state optical scanning technology has entered the productization phase, boosting light utilization efficiency from the industry average of ~10% to over 80%.
Radar Sector:
4D radar remains a hot spot. sinPro, Starleading, Aptiv, Mobileye and others have launched 4D radar products, extending detection range to 300-400 meters and enhancing 3D perception, penetration, contour profiling and static small-target detection capabilities. 5D radar delivers more accurate and stable target tracking and classification, solving pain points of 4D imaging radar in intelligent driving applications, e.g., misclassifying slow vehicles as stationary targets, falsely identifying large trucks as multiple vehicles, and missing pedestrians crossing roads. MuNiu Technology and Ireland’s Provizio add a “micro-motion” dimension to 4D radar to enable 5D radar applications.
Camera Sector:
Inspired by biological vision systems, bionic cameras achieve a wider field of view and deeper visual perception; institutions including the Korea Advanced Institute of Science and Technology (KAIST) and the Institute of Science Tokyo are focusing on their development. Three-layer stacked CMOS LOFIC technology, terahertz vision sensors, infrared thermal imaging systems, vision-LiDAR fusion sensors and the like improve the dynamic range, readout speed, detection range and accuracy of vision systems through novel structural designs.
In addition, auditory sensors and gas/particle sensors monitor sound, gases (e.g., CO₂/CO) or particles, enabling multi-dimensional data collection and enhancing functions such as intelligent driving, vehicle fault warning, child presence detection and in-cabin air quality monitoring.
I. Japanese and South Korean Research Institutions Continue R&D of Bionic Cameras.
Bionic cameras mimic biological vision systems to achieve a wider FOV or deeper perception. Such advanced vision systems are expected to be applied in autonomous driving, drones, robotics and other fields to improve imaging accuracy. During 2025-2026, Japanese and South Korean research institutions continue R&D of bionic cameras.
1. KAIST Develops Insect Compound-Eye Bionic Camera.
In 2025, the Korea Advanced Institute of Science and Technology (KAIST) announced a new bionic camera based on the insect compound-eye structure, applicable to high-speed motion capture, security surveillance, mobile device cameras and other fields.
Performance Features:
- Ultra-high frame rate: 9120 frames per second
- Excellent low-light imaging: capture objects up to 40 times dimmer than those detectable by conventional high-speed cameras
- Ultra-slim profile: thickness of < 1mm, easy to integrate into various systems
Technical Features:
Employs a compound-eye-like structure that allows parallel acquisition of frames from different time intervals. Uses multiple optical channels and temporal summation to boost the signal-to-noise ratio by accumulating light over overlapping periods.
Introduces a "channel-splitting" technique, achieving frame rates thousands of times faster than the native frame rates of the packaged image sensors.
Applies a “compressive image reconstruction” algorithm to eliminate blur caused by frame integration and reconstruct sharp images.
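The temporal-summation and channel-interleaving ideas can be illustrated with a minimal Python sketch. Everything here - the channel count, integration window and function names - is an illustrative assumption, not KAIST's implementation: each channel integrates several time-offset frames to raise SNR, while interleaving the channels preserves a high effective frame rate.

```python
import numpy as np

def compound_eye_capture(scene, n_channels=4, window=3, noise_sigma=0.05):
    """Toy model of temporal summation across time-offset channels.

    Channel c samples the scene at times c, c+n_channels, c+2*n_channels, ...
    and averages `window` of those samples, cutting noise by ~sqrt(window);
    interleaving all channels keeps the effective frame rate high.
    """
    rng = np.random.default_rng(0)
    t, h, w = scene.shape
    frames = []
    for start in range(0, t - window * n_channels + 1, n_channels):
        for c in range(n_channels):
            idx = [start + c + k * n_channels for k in range(window)]
            noisy = scene[idx] + rng.normal(0.0, noise_sigma, (window, h, w))
            frames.append(noisy.mean(axis=0))  # temporal summation
    return np.stack(frames)

# toy scene: a dim dot sweeping across a dark background
scene = np.zeros((64, 32, 32))
for i in range(64):
    scene[i, 16, i % 32] = 0.2
out = compound_eye_capture(scene)
print(out.shape)  # (n_output_frames, 32, 32)
```

Averaging `window` noisy samples reduces read-noise standard deviation by roughly sqrt(window), which is the mechanism behind the low-light gain claimed above; the actual KAIST pipeline additionally deblurs the summed frames with compressive reconstruction, which this sketch omits.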
2. The Institute of Science Tokyo Develops Bionic Wind Sensing Technology.
In 2025, a research team at the Institute of Science Tokyo, inspired by the insect antenna’s wind-sensing mechanism, developed a bionic wind sensing technology. The technology mimics insect receptors, converting airflow pressure changes into electrical signals to calculate wind direction and speed, and enhances performance using multi-segment antenna-like structures. It features high sensitivity, small size and low power consumption, making it suitable for meteorological monitoring, drone flight and other applications.
High-precision wind detection: uses micro strain sensors and a convolutional neural network (CNN), achieving up to 99.5% wind direction accuracy, and an 85.2% accuracy even with short data length (0.2 flapping cycles).
Multi-sensor collaboration: multiple strain sensors (e.g., 7 strain gauges) are installed on the biomimetic flexible wing. Through multi-point strain data acquisition and machine learning algorithms, the accuracy and stability of wind direction classification are significantly improved.
Lightweight & low-cost: traditional flow sensing devices are difficult to apply to small aerial robots due to weight and size limitations. This technology utilizes low-cost commercial components (such as strain gauges) and simple wing strain sensing to achieve efficient wind direction classification without the need for additional specialized equipment.
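As a rough sketch of how such a classifier could be wired up - the 7-gauge input, 8 direction classes and layer sizes are assumptions for illustration, not the team's published architecture - a small 1-D CNN in Python/PyTorch maps multi-gauge strain time series to a wind-direction class:

```python
import torch
import torch.nn as nn

class WindDirectionCNN(nn.Module):
    """Toy 1-D CNN: 7 strain-gauge channels -> one of 8 wind directions."""
    def __init__(self, n_gauges=7, n_directions=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_gauges, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # length-agnostic pooling
        )
        self.classifier = nn.Linear(64, n_directions)

    def forward(self, x):            # x: (batch, n_gauges, time)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)    # logits over direction classes

model = WindDirectionCNN()
strain = torch.randn(4, 7, 64)       # 4 samples of 7-channel strain data
print(model(strain).shape)           # torch.Size([4, 8])
```

The adaptive pooling makes the network indifferent to input length, which is one simple way a classifier could cope with the short 0.2-flapping-cycle windows mentioned above.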
II. Sony and Teradar among Others Launch New Sensors Such as Three-Layer Stacked CMOS Image Sensor and Terahertz Vision Sensor.
1. Sony Develops New Three-Layer Stacked CMOS Image Sensor.
The stacked CMOS image sensor currently used by Sony has a two-layer structure: the upper layer is a photosensitive pixel array (photodiode layer), and the lower layer is a logic circuit layer (responsible for image processing).
Sony has also long been committed to adding a third layer to stacked CMOS image sensors, aiming to further improve dynamic range, sensitivity, noise control, readout efficiency, speed and resolution - especially video performance - and break through the current processing bottleneck of high-resolution video recording.
2. The World's First Terahertz Vision Sensor Debuts, Bridging the Gap Between Radar and LiDAR.
At CES 2026, U.S.-based Teradar unveiled its flagship Terahertz vision sensor: Teradar Summit™. It is the world’s first long-range, high-resolution sensor designed for high performance in any type of weather, filling a critical gap left by legacy radar and lidar sensors.
Features of Teradar Summit Terahertz Vision Sensor:
- Architecture: Solid state digital phased array
- Range: 300m
- Native Resolution: 0.13°
- Point Cloud: 3D + Doppler
- 4D Measurement: Range, Azimuth, Elevation, and relative velocity
- Autonomous Vehicle Compatibility: L2 - L5
- Weather Performance: Day, Night, Fog, Rain, Snow, Sleet, Dust
Benefits of Tapping the Terahertz Band:
Terahertz waves lie between the bands used by radar (microwave) and lidar (infrared) on the electromagnetic spectrum. Their unique wavelength characteristics give them high resolution and good penetration under specific conditions (such as dry air, short distances, and non-polar obstructions). Teradar's Modular Terahertz Engine (MTE) is an all-solid-state sensor platform built on proprietary transmit (TX), receive (RX), and core processing chips, which deliver crystal-clear vision, detect small objects at great distances, and maintain uncompromised reliability in any environment - day or night, in rain, fog and snow.
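To make the "4D Measurement" line in the spec list above concrete, the sketch below models one detection as range/azimuth/elevation plus Doppler-derived radial velocity and converts it to Cartesian coordinates. The data layout is a generic assumption for illustration, not Teradar's actual output format:

```python
import math
from dataclasses import dataclass

@dataclass
class Detection4D:
    """One point as the Summit spec sheet describes it:
    range, azimuth, elevation, and relative (radial) velocity."""
    range_m: float
    azimuth_deg: float    # horizontal angle from boresight
    elevation_deg: float  # vertical angle from boresight
    velocity_mps: float   # Doppler-derived radial velocity

def to_cartesian(d: Detection4D) -> tuple[float, float, float]:
    """Spherical -> Cartesian (x forward, y left, z up)."""
    az, el = math.radians(d.azimuth_deg), math.radians(d.elevation_deg)
    x = d.range_m * math.cos(el) * math.cos(az)
    y = d.range_m * math.cos(el) * math.sin(az)
    z = d.range_m * math.sin(el)
    return x, y, z

# a target 150 m ahead, slightly right of and below boresight, closing at 5 m/s
pt = Detection4D(range_m=150.0, azimuth_deg=-2.0, elevation_deg=-0.5, velocity_mps=-5.0)
print(to_cartesian(pt), pt.velocity_mps)
```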
The Summit Terahertz Vision Sensor will be priced between radar and lidar, expected to be a few hundred US dollars, offering a price advantage.
Summit's unique ability to deliver reliable, high quality data to AD/ADAS has attracted Tier1s and automotive OEMs around the world. Currently in eight development partnerships across the U.S. and Germany, Teradar will begin bidding on high volume production programs in 2026, targeting start of production (SOP) in 2028.
III. Kyocera and Fuyao among Others Launch Vision-LiDAR Fusion Sensors.
1. Kyocera Unveils “Camera-LiDAR” Fusion Sensor.
In 2025, Kyocera launched the world’s first “camera-LiDAR” fusion sensor. The sensor achieves zero-parallax real-time data integration via optical axis alignment. Featuring high resolution and durability, it is applicable to autonomous driving, robot navigation, smart security and other fields.
Features:
High resolution (world's highest laser irradiance density: 0.045°): leveraging the Company’s proprietary laser scan unit technology from MFPs and printers, the sensor can detect a 30 cm falling object at a distance of 100 m (see the quick check after this list).
High durability with proprietary MEMS mirror: a proprietary MEMS mirror, developed with Kyocera’s advanced manufacturing and ceramic package technologies, combines with high-resolution laser scanning technology to support high-precision sensing for various industries including autonomous vehicles, marine/ships, heavy machinery, and more.
Support for customized solutions: Each element is developed and manufactured by Kyocera for total control and customization, from MEMS mirrors to optical systems, electrical circuits, and software.
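As a quick sanity check of the 30 cm-at-100 m figure - assuming the 0.045° density behaves like an angular sampling step, which is our reading rather than Kyocera's wording - a 30 cm object at 100 m subtends about 0.17°, i.e. several scan steps, so it returns multiple points rather than a single ambiguous hit:

```python
import math

def angular_size_deg(object_size_m: float, distance_m: float) -> float:
    """Angle subtended by an object of a given size at a given distance."""
    return math.degrees(2 * math.atan(object_size_m / (2 * distance_m)))

step_deg = 0.045                           # Kyocera's stated density
subtended = angular_size_deg(0.30, 100.0)
print(f"30 cm at 100 m subtends {subtended:.3f} deg")                # ~0.172
print(f"~{subtended / step_deg:.1f} scan steps across the object")   # ~3.8
```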
2. Fuyao In-Cabin Laser-Vision Fusion Solution Debuts on New AITO M7.
In September 2025, Fuyao’s in-cabin laser-vision fusion solution made its debut on the new AITO M7.
Centered on “in-cabin integration”, Fuyao’s “Fused Intelligent Driving Front Windshield” deeply integrates LiDAR and camera sensors into the front windshield glass. Through innovative materials and precision processes, it solves the industry challenge of LiDAR signal attenuation caused by curved glass, achieving high transmittance of near-infrared light and delivering a simpler, more stable and reliable perception solution for intelligent driving systems.
IV. Fraunhofer IDMT Launches Auditory Sensors to Complete Vehicle Perception System.
Current mainstream autonomous vehicles, relying on pure vision or vision-radar fusion, generally lack recognition of critical external sound events (sirens, bicycle bells, etc.), creating perception blind spots. Equipping vehicles with “hearing” - acoustic sensors plus AI algorithms that address this gap - has become a clear industry evolution path and has entered the prototype development and testing phase.
In September 2025, the Fraunhofer Institute for Digital Media Technology (IDMT) launched the “Hearing Car” project, integrating microphone arrays and AI to complement key capabilities missing from traditional perception systems.
Technical Composition:
Hardware: high-sensitivity microphone arrays integrated into the vehicle body or windshield.
Software: AI algorithms classify and recognize specific sounds (ambulance sirens, bicycle bells, children’s shouts, etc.) and link to vehicle control systems.
Interactive display: the windshield shows warnings (e.g., “ACHTUNG! SIRENE ERKANNT”, German for “WARNING! SIREN DETECTED”).
Application Scenarios:
Blind-spot detection: detects sounds in visual blind spots (e.g., bicycles or children emerging from alleys).
Emergency response: automatically yields to emergency vehicles (ambulances), adjusting the path or pulling over.
Human-machine collaboration: serves as an advanced safety assistance function (e.g., alerting the human driver) in the non-fully-automated driving stage.
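A minimal sketch of the software side described above, with heavy assumptions: the class list, alert messages, confidence threshold and the `classify_frame` stub are all hypothetical stand-ins, not Fraunhofer IDMT's implementation. A real system would replace the stub with a trained acoustic model fed by the microphone array.

```python
import numpy as np

# hypothetical safety-relevant classes, matching the scenarios above
CLASSES = ["background", "siren", "bicycle_bell", "child_shout"]
ALERTS = {"siren": "WARNING! SIREN DETECTED",
          "bicycle_bell": "WARNING! BICYCLE NEARBY",
          "child_shout": "WARNING! CHILD NEARBY"}

def classify_frame(frame: np.ndarray) -> np.ndarray:
    """Stub for the AI classifier: multichannel audio frame -> class
    probabilities. A trained acoustic model would run here."""
    energy = float(np.mean(frame ** 2))
    probs = np.full(len(CLASSES), 0.01)
    probs[1] = min(0.97, energy * 10)        # toy rule: loud -> 'siren'
    probs[0] = max(0.0, 1.0 - probs[1:].sum())
    return probs / probs.sum()

def hearing_car_step(frame: np.ndarray, threshold: float = 0.8) -> str | None:
    """One perception tick: classify, then alert if confident enough."""
    probs = classify_frame(frame)
    label = CLASSES[int(np.argmax(probs))]
    if label in ALERTS and probs.max() >= threshold:
        return ALERTS[label]                 # would appear on the windshield
    return None

# 4-channel, 100 ms frame at 16 kHz; loud enough to trip the toy rule
frame = np.random.default_rng(1).normal(0, 0.5, (4, 1600))
print(hearing_car_step(frame))
```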

