DBSCAN Clustering for LiDAR

Tags
Java
Python
Autonomous Vehicles
Open3D
Pandas
Published
November 17, 2022


Overview

This project implements DBSCAN (Density-Based Spatial Clustering of Applications with Noise), a well-known data clustering algorithm. It is designed to work with LiDAR point clouds, making it highly relevant to autonomous vehicle technology. DBSCAN is adept at identifying clusters of varying shapes and sizes, and it is particularly good at handling noise in spatial datasets.

Features

  • Custom Implementation of DBSCAN: Fully implemented in Java, adhering to the original algorithm's specifications. I primarily used Albin Aronsson and Adam Eriksson's research paper as a reference.
  • LiDAR Point Cloud Processing: Tailored to process 3D point cloud data from LiDAR sensors.
  • Flexible Clustering Parameters: Users can specify eps (epsilon) and minPts (minimum points), the two key parameters of the DBSCAN algorithm.
  • Noise Identification: Effectively marks isolated points in low-density regions as outliers.
  • Cluster Output: The program generates CSV files with cluster labels and RGB color values for each point, enabling detailed analysis of the clustering results.
  • 3D Visualization: Accompanying the project is a Python visualization script that can read the output CSV files and render the clustered point clouds in 3D. This visualizer provides an intuitive understanding of the clustering output, highlighting the effectiveness of the DBSCAN algorithm in processing LiDAR data.

How It Works

The DBSCAN algorithm operates by locating clusters of points in high-density areas and designating points in low-density regions as noise. It uses two parameters:
  • minPts: The minimum number of points required to form a dense region.
  • eps: The maximum distance between two points for one to be considered in the neighbourhood of the other.
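As a rough illustration of how these two parameters interact, the sketch below counts a point's eps-neighborhood and applies the density check. It assumes java.util.List and a Point3D class exposing a Euclidean distance(Point3D) method (as used elsewhere in the project), and it is illustrative rather than part of the actual implementation.

// Illustrative only: the DBSCAN density check for a single point.
static boolean isCorePoint(Point3D p, List<Point3D> allPoints, double eps, int minPts) {
    int neighborCount = 0;
    for (Point3D q : allPoints) {
        if (p.distance(q) < eps) { // q lies inside the eps-neighborhood of p
            neighborCount++;
        }
    }
    // A core point has at least minPts points in its eps-neighborhood; points that
    // fail the check end up as border points (reachable from a core point) or noise.
    return neighborCount >= minPts;
}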

Considerations

This project is intended as a proof of concept, built out of an interest in spatial data clustering. It is exploratory and, in a larger context, a simple model for understanding the potential of DBSCAN in real-world scenarios. Things to consider:
  • LiDAR Cost: The high cost of LiDAR technology is a significant consideration for practical applications.
  • Computer Vision: Camera-based computer vision offers a cost-effective alternative for spatial analysis, and modern algorithms can interpret complex scenes, estimating motion and depth with increasing precision.
  • Hybrid Systems: Combining LiDAR with computer vision could yield better accuracy and cost-efficiency.
 

Theory

Autonomous vehicles are typically equipped with a multitude of sensors that capture information about the surrounding scene, allowing the vehicle to navigate autonomously. One of these sensors is the laser scanner, or LiDAR (Light Detection And Ranging). Using a LiDAR, the vehicle scans the scene in front of it by sweeping a set of laser beams (typically between 8 and 64 lasers).
Each time a laser beam hits an object, the light bounces back to the LiDAR, from which a precise distance can be estimated. A complete scan of the scene with these lasers therefore generates a large set of 3D points, also called a point cloud, that corresponds to the structure of the scene. In a typical point cloud captured by a car equipped with a LiDAR, the points are color-coded based on their height with respect to the ground to facilitate visualization; a camera view of the same scene provides a useful comparison.
 

Data

The dataset features panoramic images of urban settings, which are processed into XYZ files in which each pixel is mapped to its spatial position, giving a detailed three-dimensional representation of the scene. These files form the core of the analysis, enabling the DBSCAN algorithm to identify and categorize clusters within the data that correspond to various objects and elements of the urban landscape.
x,y,z
-5.057771786205823,-4.132708931956775,0.2428091883181846
-5.0355177525576,-4.088825974461278,0.241137471769056
-5.125807925277937,-4.136111826946676,0.2448524059120176
-5.079222136014556,-4.072855314440647,0.2420290529642492
...
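For context, here is a minimal sketch of how such an x,y,z file might be loaded into Point3D objects before clustering. The XYZReader class and the three-argument Point3D constructor are illustrative assumptions, not the project's actual loader.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class XYZReader {
    // Reads an x,y,z CSV file (with a header row) into a list of Point3D objects.
    // Assumes a Point3D(double x, double y, double z) constructor.
    public static List<Point3D> readXYZ(String filename) throws IOException {
        List<Point3D> points = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(new FileReader(filename))) {
            String line = reader.readLine(); // skip the "x,y,z" header row
            while ((line = reader.readLine()) != null) {
                String[] parts = line.split(",");
                points.add(new Point3D(Double.parseDouble(parts[0]),
                                       Double.parseDouble(parts[1]),
                                       Double.parseDouble(parts[2])));
            }
        }
        return points;
    }
}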
 

Clustering Algorithm

public class DBScan {

    public void findCluster() {
        // Reset # of clusters and instantiate the NearestNeighbors object:
        numberOfClusters = 0;
        NearestNeighbors neighborObject = new NearestNeighbors(this.points);

        // Run through the entire list of points
        for (Point3D point : points) {
            if (point.getLabel() != 0) {
                // Already processed, skip
                continue;
            }

            // Find nearby neighbors
            List<Point3D> N = neighborObject.rangeQuery(this.eps, point);

            // Density check
            if (N.size() < minPts) {
                point.setLabel(-1); // Mark as noise
                continue;
            }

            numberOfClusters++;
            point.setLabel(numberOfClusters); // Set label as cluster number

            // Push the neighbors onto the stack
            for (int i = 0; i < N.size(); i++) {
                outputStack.push(N.get(i));
            }

            // Loop through the neighbours list
            while (outputStack.size() > 0) {
                Point3D Q = outputStack.pop();
                if (Q.getLabel() == -1) {
                    // Reassign the noise point as a border point of the current cluster
                    Q.setLabel(numberOfClusters);
                }
                // If label is not 0 (already processed), skip
                if (Q.getLabel() != 0) {
                    continue;
                }
                Q.setLabel(numberOfClusters);
                N = neighborObject.rangeQuery(this.eps, Q);
                // Density check (core point):
                if (N.size() >= minPts) {
                    for (int i = 0; i < N.size(); i++) {
                        // Add to pre-existing neighbours list
                        outputStack.push(N.get(i));
                    }
                }
            }
        }
    }
}
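After findCluster() runs, each point carries a label: 0 means unprocessed, -1 marks noise, and positive integers are cluster ids. The Features section mentions exporting CSV files with cluster labels and RGB colors, but the writer itself is not shown above, so the sketch below is only one hedged way it might look; the ClusterWriter class, the getX/getY/getZ getters, and the column layout are assumptions.

import java.io.IOException;
import java.io.PrintWriter;
import java.util.List;
import java.util.Random;

public class ClusterWriter {
    // Writes each point's coordinates, cluster label, and an RGB color derived from the label.
    // The column layout (x,y,z,label,r,g,b) is illustrative, not the project's exact output format.
    public static void writeClusters(List<Point3D> points, String filename) throws IOException {
        try (PrintWriter out = new PrintWriter(filename)) {
            out.println("x,y,z,label,r,g,b");
            for (Point3D p : points) {
                int label = p.getLabel();
                // Seeding the generator with the label gives every point in a cluster
                // the same color; noise (label -1) gets its own color the same way.
                Random rng = new Random(label);
                out.printf("%f,%f,%f,%d,%.3f,%.3f,%.3f%n",
                        p.getX(), p.getY(), p.getZ(), label,
                        rng.nextDouble(), rng.nextDouble(), rng.nextDouble());
            }
        }
    }
}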
 

Data Visualization

import pandas as pd
import open3d as o3d
import os
import numpy as np

# Get raw point cloud:
df_Raw_Point_Cloud = pd.read_csv("data/Point_Cloud_2.csv")

# Full point cloud:
pcd = o3d.geometry.PointCloud()
points = np.vstack((df_Raw_Point_Cloud.x, df_Raw_Point_Cloud.y, df_Raw_Point_Cloud.z)).transpose()
pcd.points = o3d.utility.Vector3dVector(points)
o3d.visualization.draw_geometries([pcd])
 
 

Update: Algorithm Improvements

Enhanced Nearest Neighbor Search

The DBSCAN clustering algorithm has been augmented with a new NearestNeighborsKD class, which includes a constructor accepting a list of Point3D objects and an optimized rangeQuery method. This method is pivotal for finding the nearest neighbors of a point within a specified radius (epsilon, ε). The improvement uses a computational geometry data structure, the k-d tree (k-dimensional tree), to achieve a more efficient neighborhood search.
 
k-d Trees: A specialized binary search tree designed to manage spatial data, facilitating rapid search operations. Unlike regular binary search trees that compare nodes on a single key, k-d trees organize points in a k-dimensional space, splitting the dataset along alternating dimensions at each level of the tree. This multidimensional division enables efficient querying and retrieval of spatially proximate data points, improving the performance of operations such as nearest neighbor searches.
 
Below is a simple example of inserting various points. On the left is the spatial partitioning, and on the right is a simple 2-dimensional k-d tree. The first division is along the x-coordinate at 30, splitting the points into two groups. Subsequent divisions alternate between axes, with the next cut at y-coordinate 25. The right side depicts the tree structure, where each node represents a point and each level alternates between x and y divisions, corresponding to the splits shown on the left.
[Figure: spatial partitioning (left) and the corresponding 2-dimensional k-d tree (right)]
 
 

Efficient Spatial Indexing with k-d Trees

The k-d tree is an abstraction that partitions the point set, creating a binary tree representation that allows for fast querying of spatial data. This binary tree structure has two key phases in its lifecycle:
  1. Building the k-d tree: By inserting all points into the tree, we create a structured, navigable index that significantly reduces the complexity of searching for neighboring points. The tree alternates between the k dimensions at each level, ensuring that the partitioning is adapted to the data's inherent spatial characteristics.
  2. Querying for neighbors: The rangeQuery method has been redesigned to utilize the k-d tree, taking advantage of its depth to achieve a logarithmic time complexity for neighbor searches. When searching for the neighbors of a point, the algorithm efficiently traverses the tree, pruning search paths that fall outside the query point's epsilon-radius.
 
The use of a k-d tree transforms the neighborhood search from an O(N) scan into, on average, an O(log N) search, resulting in substantial performance gains that are particularly noticeable on larger datasets. The diagrams included in the documentation visually represent the k-d tree's structure and the associated partitioning of 2D and 3D space, illustrating the optimized search process.
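The KDTree class itself is not reproduced in this post. For context, here is a minimal sketch of what it could look like, reconstructed from the fields the rangeQuery code below relies on (a node holding its point, splitting axis, split value, and left/right children); it is an illustrative assumption, not the project's exact implementation.

public class KDTree {

    public static class KDnode {
        public Point3D point;   // point stored at this node
        public int axis;        // splitting dimension: 0 = x, 1 = y, 2 = z
        public double value;    // coordinate of 'point' along 'axis'
        public KDnode left, right;

        KDnode(Point3D point, int axis) {
            this.point = point;
            this.axis = axis;
            this.value = point.get(axis); // assumes Point3D.get(axis) returns that coordinate
        }
    }

    private static final int DIMENSIONS = 3;
    private KDnode root;

    public KDnode getRoot() {
        return root;
    }

    // Insert a point, alternating the splitting axis at each level of the tree.
    public void add(Point3D p) {
        root = insert(root, p, 0);
    }

    private KDnode insert(KDnode node, Point3D p, int axis) {
        if (node == null) {
            return new KDnode(p, axis);
        }
        // Points at or below the split value go left, others go right,
        // matching the descent conditions used in rangeQuery below.
        if (p.get(node.axis) <= node.value) {
            node.left = insert(node.left, p, (node.axis + 1) % DIMENSIONS);
        } else {
            node.right = insert(node.right, p, (node.axis + 1) % DIMENSIONS);
        }
        return node;
    }
}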
 
import java.util.ArrayList;
import java.util.List;

public class NearestNeighborsKD {

    public KDTree kdTree;

    // Construct with a list of points
    public NearestNeighborsKD(List<Point3D> list) {
        this.kdTree = new KDTree();
        for (Point3D p : list) {
            kdTree.add(p);
        }
    }

    // Gets the neighbors of pt (at a distance less than eps)
    public List<Point3D> rangeQuery(Point3D pt, double eps) {
        // Use a fresh list per query so repeated calls don't accumulate results
        List<Point3D> N = new ArrayList<Point3D>();
        rangeQuery(pt, eps, N, this.kdTree.getRoot());
        return N;
    }

    private void rangeQuery(Point3D pt, double eps, List<Point3D> N, KDTree.KDnode node) {
        if (node == null) {
            return;
        }
        if (pt.distance(node.point) < eps) {
            N.add(node.point);
        }
        // Only descend into subtrees that could contain points within eps of pt
        if (pt.get(node.axis) - eps <= node.value) {
            rangeQuery(pt, eps, N, node.left);
        }
        if (pt.get(node.axis) + eps > node.value) {
            rangeQuery(pt, eps, N, node.right);
        }
    }
}
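As a usage note, this class is meant as a drop-in replacement for the linear-scan NearestNeighbors used in findCluster; the only visible difference is the argument order of rangeQuery (point first, then eps). A minimal sketch of the change:

// Inside findCluster(), swapping in the k-d tree backed search:
NearestNeighborsKD neighborObject = new NearestNeighborsKD(this.points);
List<Point3D> N = neighborObject.rangeQuery(point, this.eps);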

Installation

Clone the repository to your local machine and run the following, where the two numeric arguments are the DBSCAN parameters (here, an eps of 1.2 and a minPts of 10):
$ java DBScan {datafile}.csv 1.2 10
$ python3 view_point_cluster.py
 
 
Check out the project on GitHub:
Deniz-Jasa/Clustering-for-LiDAR