How to Add a Custom Plugin and Interpreter in LidarView

I’m trying to add support for a new LiDAR model in LidarView, but currently, the software doesn’t support this sensor natively. I’d like to implement a custom plugin, similar to how the Livox plugin is integrated.

Here are my main questions:

  • Plugin Integration:
      • What are the steps to follow when adding a new plugin in LidarView?
      • Is there any official documentation or developer guide that explains how to properly implement and register a new plugin?
  • Sensor Configuration UI:
      • How do I add a new interpreter and sensor model name to the sensor configuration UI in LidarView?
      • Where and how are the options for supported sensors populated in the UI?
  • Livox SDK Integration:
      • How is the Livox SDK integrated within LidarView?
      • Are there any example files, plugin templates, or interface patterns I can follow to replicate the integration process?

My plan is to follow a similar path as the Livox plugin — using a vendor SDK to interpret packets and convert them to usable data inside LidarView. Any help would be much appreciated!

Thanks in advance.

@Timothee_Couble

Hello @Silver_11,

We don’t have any documentation on implementing a new lidar interpreter in LidarView yet, but I can describe the required steps here:

  • You will first need to implement an interpreter plugin. We have two working examples in the LidarView repository where we tried to keep the implementation as simple as possible: a time-based lidar interpreter where frame splitting is based on a time delay (e.g. Livox LiDARs), and a classic rotating lidar interpreter (e.g. Velodyne or Hesai LiDARs) where frame splitting is based on azimuth (see the sketch after this list).
  • The UI is generated with the ParaView proxy system using .xml files. It is a powerful tool, but it can be a bit complex and overwhelming at first. In the example README you will find a description of each file and its purpose.
  • The Livox plugin implementation code is available here. Note that we didn’t use the Livox SDK directly, as it was easier for us to reimplement the protocol from the Livox documentation.
  • If you want to contribute this new plugin to the LidarView open-source code, I can help you set up the repository and add it as a submodule in LidarView.
  • To build the plugin, you can either add it to the LidarView Plugins/ directory next to the other interpreters, or build it externally with find_package(LidarView) and the ParaView plugin CMake macros, as done here.
  • Testing functions are also available; an example of their use for Livox is available here. They check that your interpreter’s point information is coherent and that it is compatible with our SLAM algorithm.
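
As a rough illustration of the two frame-splitting strategies mentioned in the first bullet, here is a standalone C++ sketch. It is not the actual LidarView interpreter API; the class and method names are hypothetical, and only the splitting logic itself is shown.

```cpp
// Standalone sketch of the two frame-splitting strategies described above.
// NOT the actual LidarView interpreter API: class/method names are hypothetical.
#include <cstdint>

struct DecodedPoint
{
  double TimeSec;    // point acquisition time (s)
  double AzimuthDeg; // horizontal angle (deg), for rotating sensors
};

// Time-based splitting (e.g. Livox-like sensors): start a new frame once a
// configurable delay has elapsed since the first point of the current frame.
class TimeBasedFrameSplitter
{
public:
  explicit TimeBasedFrameSplitter(double frameDurationSec)
    : FrameDuration(frameDurationSec) {}

  bool IsNewFrame(const DecodedPoint& pt)
  {
    if (this->FrameStart < 0.0 || pt.TimeSec - this->FrameStart >= this->FrameDuration)
    {
      this->FrameStart = pt.TimeSec;
      return true;
    }
    return false;
  }

private:
  double FrameDuration;
  double FrameStart = -1.0;
};

// Azimuth-based splitting (e.g. Velodyne/Hesai-like rotating sensors): start a
// new frame when the azimuth wraps around (i.e. decreases between two points).
class AzimuthFrameSplitter
{
public:
  bool IsNewFrame(const DecodedPoint& pt)
  {
    bool wrapped = pt.AzimuthDeg < this->LastAzimuth;
    this->LastAzimuth = pt.AzimuthDeg;
    return wrapped;
  }

private:
  double LastAzimuth = 0.0;
};
```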

Let me know if you have more questions.
Timothée

Hi @Timothee_Couble
Thank you for the reply.

To add a new LiDAR plugin in LidarView, do I only need to place it under lidarview/src/plugins, or do I also need to understand and modify the LVCore modules like IO/Lidar and IO/Network? These modules contain files such as vtkLidarReader, vtkLidarStreamer, and vtkUDPReceiver, which handle live sensor data streaming and reading from PCAP files.

Also, how does LidarView determine the port number, sensor vendor, and model in order to decide which plugin should be used?

You need to implement the plugin under lidarview/src/Plugins. In the plugin, you can then use vtkLidarReader and vtkLidarStream in the proxy files (.xml) to implement your own reader / stream, with the corresponding interpreter. Please see this readme for more details about the structure of a plugin.

The port, sensor, and model are set directly by the user (you can set default values in the .xml files), either in the Properties dock in advanced mode or in the configuration dialog shown in LidarView when you click on Open PCAP:


I am trying to run SLAM in LidarView using a custom plugin that I added for my LiDAR.

The LiDAR packets do not contain intensity or timestamp information, so I am assigning the timestamp using the system time. The LiDAR is a 4-channel device and does not provide IMU data.

When I try to run SLAM (online, on live streaming data), I keep getting the following warnings repeatedly:

[WARNING] Unable to extrapolate scan pose from previous motion : extrapolation time is too far.
[WARNING] Not enough usable logged states, cannot undistort
[WARNING] Pointcloud is empty, voxel grid not updated.
[WARNING] Moving objects constraint was too strong, removing constraint

Because of these warnings, SLAM is not working properly.

I would like to know:

  1. Is there any documentation on how to properly configure SLAM in LidarView (especially for live streaming data without IMU)?

  2. How should I configure the TF tree in this case?

Any help would be much appreciated!

Thanks in advance.

Hi @Silver_11,

Would it be possible to share with us a frame from your sensor? (Ctrl+S in stream mode → save as .vtp)

Also, more logs from the SLAM could help us figure out what’s wrong. Could you change the SLAM verbosity to level 3?

For the TF tree:

  • LidarView does not directly handle a TF tree; usually we set the LiDAR-to-base transform using the transform options in the lidar reader / stream.
  • You could also change the Base to Lidar transform SLAM parameter, as it considers baseToLidar to be identity by default. The trajectory is expressed in the lidar frame (see the sketch after this list).
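
To illustrate the second point, here is a minimal sketch using Eigen only (not the LidarView or SLAM API) of how a trajectory expressed in the lidar frame can be re-expressed in the base frame, given a constant baseToLidar transform; the function name is illustrative.

```cpp
// Minimal sketch (Eigen only, not the LidarView/SLAM API): re-expressing a
// trajectory given in the lidar frame into the base frame, assuming a constant
// baseToLidar transform (here taken as the pose of the lidar expressed in the
// base frame; invert it if your convention is the opposite).
#include <Eigen/Geometry>
#include <vector>

std::vector<Eigen::Isometry3d> LidarTrajectoryToBase(
    const std::vector<Eigen::Isometry3d>& lidarPoses, // T_lidar0_lidar(t)
    const Eigen::Isometry3d& baseToLidar)             // T_base_lidar (constant)
{
  std::vector<Eigen::Isometry3d> basePoses;
  basePoses.reserve(lidarPoses.size());
  for (const auto& T : lidarPoses)
  {
    // T_base0_base(t) = T_base_lidar * T_lidar0_lidar(t) * T_base_lidar^-1
    basePoses.push_back(baseToLidar * T * baseToLidar.inverse());
  }
  return basePoses;
}
```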

lidar_livestream.vtp (75.6 KB)

output logs:
#########################################################
Processing frame 369 (at time 1638.817721000)
#########################################################

========== Keypoints extraction ==========
Extracted features : 15 edges 0 intensity_edges 346 planes
→ Keypoints extraction took : 0.406 ms (average : 0.320 ms)
========== Ego-Motion ==========
[WARNING] Pointcloud is empty, voxel grid not updated.
Keypoints extracted from previous frame : 13 edges 0 intensity_edges 222 planes
→ EgoMotion : build KD tree took : 0.144 ms (average : 0.093 ms)
→ Pose estimation : ICP took : 0.375 ms (average : 0.145 ms)
Ceres Solver Report: Iterations: 5, Initial cost: 7.869208e-01, Final cost: 6.739072e-01, Termination: CONVERGENCE
→ Pose estimation : LM optim took : 0.903 ms (average : 0.449 ms)
→ Pose estimation : ICP took : 0.167 ms (average : 0.145 ms)
Ceres Solver Report: Iterations: 3, Initial cost: 7.248729e-01, Final cost: 7.229931e-01, Termination: CONVERGENCE
→ Pose estimation : LM optim took : 0.434 ms (average : 0.449 ms)
→ Pose estimation : ICP took : 0.158 ms (average : 0.145 ms)
Ceres Solver Report: Iterations: 4, Initial cost: 6.814335e-01, Final cost: 6.808173e-01, Termination: CONVERGENCE
→ Pose estimation : LM optim took : 0.508 ms (average : 0.449 ms)
→ Pose estimation : ICP took : 0.162 ms (average : 0.145 ms)
Ceres Solver Report: Iterations: 5, Initial cost: 4.470499e-01, Final cost: 4.346383e-01, Termination: CONVERGENCE
→ Pose estimation : LM optim took : 0.636 ms (average : 0.449 ms)
→ Ego-Motion : whole ICP-LM loop took : 3.803 ms (average : 1.930 ms)
Matched keypoints: 336 (11 edges 0 intensity_edges 325 planes )
Estimated Ego-Motion (motion since last frame):
translation = [-0.007 0.006 0.136] m
rotation = [-1.354 4.811 0.424] °
→ Ego-Motion took : 3.996 ms (average : 2.063 ms)
========== Localization ==========
Undistortion performed using SLAM poses interpolation
Undistortion performed using SLAM poses interpolation
Undistortion performed using SLAM poses interpolation
→ Localization : initial undistortion took : 0.117 ms (average : 0.075 ms)
Keypoints extracted from map : 1 edges 0 intensity_edges 4 planes
→ Localization : map keypoints extraction took : 0.006 ms (average : 0.011 ms)
[ERROR] Not enough keypoints matched, Pose estimation skipped for this frame.
→ Localization : whole ICP-LM loop took : 0.062 ms (average : 0.772 ms)
Matched keypoints: 0 (0 edges 0 intensity_edges 0 planes )
Position uncertainty = 0.000 m (along [0.000 0.000 0.000])
Orientation uncertainty = 0.000 ° (along [0.000 0.000 0.000])
→ Localization took : 0.213 ms (average : 0.890 ms)
→ Confidence estimators computation took : 0.001 ms (average : 0.001 ms)
→ Logging took : 0.022 ms (average : 0.018 ms)
========== SLAM results ==========
Ego-Motion:
translation = [-0.007 0.006 0.136] m
rotation = [-1.354 4.811 0.424] °
Localization:
position = [6.333 6.690 0.554] m
orientation = [157.494 -22.307 -25.508] °
→ SLAM frame processing took : 4.695 ms (average : 3.373 ms)
Undistortion performed using SLAM poses interpolation
→ vtkSlam : basic output conversions took : 0.128 ms (average : 0.168 ms)
→ vtkSlam : output keypoints maps took : 0.009 ms (average : 0.019 ms)
→ vtkSlam took : 5.110 ms (average : 3.867 ms)
→ vtkSlam : input conversions took : 0.150 ms (average : 0.165 ms)

According to the logs and the data, the process seems to work (no SLAM errors). If the results are not good enough, it could be that the lidar field of view is very narrow and the SLAM cannot collect enough keypoints (15 edges, 0 intensity_edges, 346 planes).

You could inspect the keypoints in the SLAM output (current edge keypoints, current planar keypoints) by enabling the Output current keypoints parameter.

Here we are using only the lidar sensor (4 channels, without IMU data) and no base. Can we create the map and navigate on it?

In theory, this should be possible, but we have not yet tested with this sensor / this number of lasers. This may require updates in the SLAM to make it work properly.
Keep in mind the possible limitations if the field of view does not contain relevant features to allow localization.
Also, it seems the timestamps are correlated with the laser ids, which does not seem physically coherent.

If you want us to have a deeper look and see how we can adapt the SLAM to this input (or see how the inputs need to be adapted), we propose support contracts.

Another point, let us know if you would like to contribute this new interpreter to LidarView.

Thanks,

Gatien

I am using a LiDAR with 4 channels, ~100° horizontal FOV, and 0.25° angular resolution.

I would like to know whether there are inherent limitations for running SLAM on live-stream data with this sensor configuration.

  • Can the relatively narrow horizontal FOV and limited number of channels affect the ability of SLAM to extract enough keypoints for stable mapping?

  • With this LiDAR, is it realistic to expect to build a reliable map in real time, or are there constraints (e.g., feature sparsity, drift, need for additional sensors like IMU/odometry)?

That will depend mainly on the conditions in which you will use it.
Here are the main limitations:

  • As you have a limited vertical FoV, you would need either planar motion or vertical motion limited in speed (to keep a decent overlap between the scans and the built maps).
  • The limited horizontal FoV means that the SLAM could get lost if it gets too close to structures and/or sees surfaces with low 3D variation (typically a flat wall).

The real time aspect should not be a problem in your case considering the density of points.

0.25° resolution is quite standard for rotating LiDARs.

In your extracted data, I think the timestamps are not correctly assigned: you have one value per laser, which does not make sense for a rotating sensor (see the sketch below).
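
For reference, a common way to assign per-point timestamps on a rotating sensor is to interpolate them across the frame in firing order rather than keeping one value per laser. The sketch below is a generic illustration, not the LidarView interpreter API.

```cpp
// Generic sketch (not the LidarView API): assigning per-point timestamps on a
// rotating sensor by interpolating linearly across the frame, in firing order,
// instead of using a single timestamp per laser channel.
#include <cstddef>
#include <vector>

std::vector<double> InterpolateFrameTimestamps(
    std::size_t nbPoints,  // number of points in the frame, in firing order
    double frameStartSec,  // acquisition time of the first point
    double frameEndSec)    // acquisition time of the last point
{
  std::vector<double> times(nbPoints, frameStartSec);
  if (nbPoints < 2)
  {
    return times;
  }
  const double step = (frameEndSec - frameStartSec) / static_cast<double>(nbPoints - 1);
  for (std::size_t i = 0; i < nbPoints; ++i)
  {
    times[i] = frameStartSec + static_cast<double>(i) * step;
  }
  return times;
}
```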

I have done SLAM in LidarView using my custom plugin inside a room. This is the result I got (screenshot attached).

The trajectory is forming, but it is not very smooth. Initially, the SLAM path was jumping even when the sensor was stationary, until I set the environment = indoor option, and whenever there is object movement in the scene, the trajectory starts jumping.

I am planning to integrate this with odometry data from my robot’s wheel encoders to improve the trajectory quality. I already have a ROS 2 node which will publish the odom data.

My questions:

  1. Can I integrate odometry data into the SLAM pipeline in LidarView?

  2. If yes, does LidarView support live odometry streaming (while the robot moves), or is it only possible through pre-recorded files?

  3. Is there an existing way/plugin to add odometry data, or do I need to extend the pipeline myself?

Hi @Silver_11,

  1. The SLAM in LidarView currently supports odometry through a pre-recorded CSV file (see the instructions in the How to SLAM with LidarView doc).
  2. We are currently trying to make that functionality compatible with a pipeline directly in LidarView. It would also bring the live odometry streaming feature you are referring to.
  3. We don’t have any plugin that gets odometry data (as it would depend on the data received). You’ll need to create a plugin (or change the plugin you are currently using) that implements a LiveSource adding the odometry data to a vtkTable (see the sketch at the end of this reply). We also plan to update our Ros2IO plugin to support reading from ROS 2 nodes.

Note that we also have a ROS 2 wrapping of our SLAM library that can be used to read an odometry topic, but you’ll need to use RViz for visualization.
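
As a rough illustration of point 3 (a LiveSource filling a vtkTable with odometry values), here is a minimal sketch using the plain VTK API. The column names ("Time" and "odom", the distance in meters) follow the format described in the follow-up reply below; how the values are obtained from your robot is up to your plugin, and the function name is illustrative.

```cpp
// Minimal sketch (plain VTK API): building a vtkTable with a time column and
// an odometry distance column. Column names ("Time", "odom" in meters) are
// taken from the format described in the reply below; adapt as needed.
#include <cstddef>
#include <vector>
#include <vtkDoubleArray.h>
#include <vtkNew.h>
#include <vtkSmartPointer.h>
#include <vtkTable.h>

vtkSmartPointer<vtkTable> MakeOdometryTable(
    const std::vector<double>& timesSec,
    const std::vector<double>& distancesM)
{
  vtkNew<vtkDoubleArray> time;
  time->SetName("Time");
  vtkNew<vtkDoubleArray> odom;
  odom->SetName("odom");

  for (std::size_t i = 0; i < timesSec.size(); ++i)
  {
    time->InsertNextValue(timesSec[i]);
    odom->InsertNextValue(distancesM[i]);
  }

  auto table = vtkSmartPointer<vtkTable>::New();
  table->AddColumn(time);
  table->AddColumn(odom);
  return table;
}
```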

I would like to extend my plugin so that SLAM in LidarView can subscribe to odom data & use this live odometry stream as a motion prior, instead of (or in addition to) the CSV-based odometry input mentioned in the “How to SLAM with LidarView” documentation.

My questions are:

  1. Can I extend my custom plugin to subscribe directly to a ROS 2 /odom topic?

  2. If yes, in what format does the SLAM pipeline in LidarView expect odometry data (pose, transform, velocities)?

  3. Is there an API or example of how to pass odometry data from the plugin into the SLAM pipeline (instead of reading from CSV)?

  4. If live ROS 2 subscription is not yet supported, is there a recommended way to convert my odometry into the proper SLAM input format so I can test with my custom plugin?

  1. Theoretically yes, but we don’t have any example code for it yet.
  2. The SLAM expects a vtkTable with two columns: Time and odom, which is the distance in meters.
  3. As said earlier, this is not yet merged into the SLAM. You can follow the MR progress here.
  4. You would have to create a new ROS node that gets the odometry info and saves it as a CSV, or converts it to a VTK format directly (see the sketch below).
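
Following point 4, here is a hypothetical ROS 2 node (node name, topic, and output file are illustrative) that subscribes to nav_msgs/Odometry, accumulates the distance traveled, and writes Time/odom rows to a CSV. Check the How to SLAM with LidarView documentation for the exact CSV layout the SLAM filter expects; the columns below only follow the Time/odom format mentioned above.

```cpp
// Hypothetical ROS 2 node: subscribes to nav_msgs/Odometry, accumulates the
// traveled distance (assuming "odom" is the cumulative distance in meters, as
// suggested above), and appends "Time,odom" rows to a CSV file.
#include <cmath>
#include <fstream>
#include <memory>

#include "nav_msgs/msg/odometry.hpp"
#include "rclcpp/rclcpp.hpp"

class OdomToCsvNode : public rclcpp::Node
{
public:
  OdomToCsvNode() : Node("odom_to_csv"), csv_("odom.csv")
  {
    csv_ << "Time,odom\n";
    sub_ = this->create_subscription<nav_msgs::msg::Odometry>(
      "odom", 10,
      [this](nav_msgs::msg::Odometry::SharedPtr msg) { this->OnOdom(msg); });
  }

private:
  void OnOdom(const nav_msgs::msg::Odometry::SharedPtr msg)
  {
    const auto& p = msg->pose.pose.position;
    if (hasPrev_)
    {
      // Accumulate the distance traveled since the previous message.
      distance_ += std::sqrt(std::pow(p.x - prevX_, 2) +
                             std::pow(p.y - prevY_, 2) +
                             std::pow(p.z - prevZ_, 2));
    }
    prevX_ = p.x; prevY_ = p.y; prevZ_ = p.z;
    hasPrev_ = true;

    const double t = rclcpp::Time(msg->header.stamp).seconds();
    csv_ << t << "," << distance_ << "\n";
  }

  rclcpp::Subscription<nav_msgs::msg::Odometry>::SharedPtr sub_;
  std::ofstream csv_;
  double prevX_ = 0.0, prevY_ = 0.0, prevZ_ = 0.0;
  double distance_ = 0.0;
  bool hasPrev_ = false;
};

int main(int argc, char** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<OdomToCsvNode>());
  rclcpp::shutdown();
  return 0;
}
```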