
Getting a Handle on LiDAR

Much as is the trend with many other sensing technologies, the volume of data coming off light detection and ranging (LiDAR) sensors available for analysis, visualization and dissemination is growing by leaps and bounds. Although LiDAR sensors have always generated plenty of information, the growth in demand for LiDAR data, along with advances in sensors and collection technologies, has created masses of data that just keep on growing.

As a result, the software that is required to process and analyze this data has had to keep pace. Companies providing such software have employed the multicore processing techniques now becoming common when tackling big data sets, and have also added features and functions to satisfy the escalating demands of users.

Three-dimensional visualizations of LiDAR data and the fusion of LiDAR and imagery are two of the key features that have begun to emerge in a major way in recent months.

“The volume of LiDAR data has grown massively,” said Matt Morris, director of geospatial product development at Overwatch, an operating unit of Textron Systems, a Textron Inc. company. “Sensors are able to collect data on much larger areas in a much shorter period of time. The density of the data has also grown. When we first released our LiDAR analysis software, the resolution was 1 meter. Now it is down to centimeters.”

“We’ve gone from zero to 60 in a very short time,” said Patrick Cunningham, president of Blue Marble Geographics. “This has overwhelmed most LiDAR hardware and software systems in their ability to process these large volumes of data.”

“Not only is there more LiDAR data being collected than in the past, but it is getting into the hands of a wider variety of users who are not necessarily LiDAR experts,” said Jennifer Stefanacci, director of product management at Exelis VIS. “This creates the need for easy-to-use tools. New types of data formats are also being utilized. As we build new software tools we are always thinking about how to support that data.”

“The data question can be broken down two different ways,” said Nicholas Rosengarten, a product manager at BAE Systems. “One has to do with the wealth of data. It has become a lot more accessible and there is a lot more of it. You have to make sense of the data sets in order to answer intelligence questions.”

“Within the past year or so there have been major changes in LiDAR hardware and how the data is collected,” said Matt Bethel, manager of systems engineering at Merrick & Company. “Specifically, multiple look angles allow for better feature definition. Higher LiDAR pulse rates provide higher resolution and greater data density.” Higher pulse rates return denser data and provide more data fidelity and detail. Multiple looks involve generating more than one LiDAR pulse directed toward the same spot before the initial pulse is returned and detected.

“These are huge advancements in foliage penetration and better feature definition,” said Bethel. “These developments allow LiDAR tools to be used more effectively and have brought significant changes to airborne LiDAR mapping. Higher pulse rates enable more data to be collected in a single pass and allow flight at higher and safer altitudes.”
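
As a rough illustration of the tradeoff Bethel describes, the nominal point density of an airborne scan can be estimated from pulse rate, flying speed, altitude and scan field of view. The sketch below uses arbitrary figures, not any particular sensor's specification.

    import math

    # Illustrative assumptions only, not a real sensor's specification.
    pulse_rate = 400_000          # pulses per second
    ground_speed = 60.0           # aircraft speed in meters per second
    altitude = 1500.0             # meters above ground level
    half_fov = math.radians(20)   # half of the scanner's field of view

    # Swath width on the ground, and the area swept per second of flight.
    swath_width = 2 * altitude * math.tan(half_fov)
    area_per_second = ground_speed * swath_width

    # Nominal (single-return) point density in points per square meter.
    density = pulse_rate / area_per_second
    print(f"swath {swath_width:.0f} m, about {density:.1f} points per square meter")

Doubling the pulse rate doubles the nominal density at a given altitude, or keeps density constant while the aircraft flies higher, which is the safety benefit Bethel notes.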

“The ability to interpret an unstructured LiDAR point cloud into a solid, real-world representation continues to evolve,” said Sandra Vaquerizo, vice president, CG2 Inc. “By leveraging an oblique angle source, analysts can capture all of the rich detail along the sides of features that can be scanned from lower altitudes or longer look angles. The result is a highly accurate, complete scene that can be viewed from any angle.”

Computational Challenge

The greater volumes of data being generated by LiDAR sensors represent a double-edged sword. On one hand, the data density and resolution allow LiDAR analysis software to do its job better. But the challenge of processing the larger data sets requires new computational approaches.

“More data allows the software to perform at higher success rates for automated feature extraction and fuller classification and synchronization of data,” said Bethel. “As long as the LiDAR scans are accurately calibrated and positioned, the data coming from the multiple look angles is no hindrance. It is only a benefit.” There have been advances in the last two years in the accuracy of LiDAR data calibration, which improves data quality by correcting misalignments resulting from the positioning of the sensor on the aircraft.

In order to deal with the larger data sets, distributed processing schemes have been brought to bear, much as they have for other big-data problems. “Feature extraction especially does not lend itself to computation on just one machine,” said Morris. “Distributed processing in the cloud chops the process into bits, with each part being processed individually in a node and then merged back together. Something that could take two hours on one machine can be done in five minutes on a cluster of 10 work stations.”
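
A minimal sketch of that chop-and-merge pattern is shown below, using Python's multiprocessing pool in place of a workstation cluster. The 500-meter tiling and the per-tile statistic are stand-ins for a real feature-extraction step, not Overwatch's implementation.

    import numpy as np
    from multiprocessing import Pool

    def extract_features(tile):
        # Stand-in for per-tile work such as ground or building classification.
        return {"points": len(tile), "mean_elevation": float(tile[:, 2].mean())}

    def split_into_tiles(points, tile_size=500.0):
        # Bucket points into square tiles keyed by their grid cell.
        keys = np.floor(points[:, :2] / tile_size).astype(int)
        tiles = {}
        for key, pt in zip(map(tuple, keys), points):
            tiles.setdefault(key, []).append(pt)
        return [np.array(t) for t in tiles.values()]

    if __name__ == "__main__":
        points = np.random.rand(200_000, 3) * [2000.0, 2000.0, 50.0]  # synthetic cloud
        tiles = split_into_tiles(points)
        with Pool() as pool:                        # one worker per CPU core
            results = pool.map(extract_features, tiles)
        total = sum(r["points"] for r in results)   # merge the per-tile results
        print(f"processed {total} points across {len(tiles)} tiles")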

Overwatch recently released a new version of its LiDAR Analyst product. “Our history has been to focus on feature extraction,” said Morris. “LiDAR Analyst automates the extraction of terrain, buildings and trees. The latest version allows users to obtain a full 3-D visualization of buildings and vegetation to perform line-of-sight and buffer-zone analyses. The dissemination tools allow the LiDAR data to be sent to Google Earth, PowerPoint presentations and 3-D PDF files.”

The solution can render LiDAR point clouds containing more than 1 billion points. “To accomplish this, we improved our algorithms to focus on the denser point clouds,” said Morris. “We also implemented a distributed processing system that allows the program to process data on a cluster of workstations. This allows us to process extremely large data sets in a short amount of time. The output can be viewed in 3-D format, allowing users to pull the true value out of LiDAR.”

Blue Marble Geographics recently released a new version of Global Mapper, a geographic information system that works with all kinds of geospatial data, including LiDAR. “Most LiDAR data is delivered to Global Mapper in compressed format,” said Cunningham. “With the most recent release, we added our own compression and can now process point clouds containing hundreds of millions of points. Most GIS software today works with only 5 or 10 million points at a time.”

Global Mapper 14.1 has built-in functionality for distance and area calculations, elevation querying, line-of-sight calculations, image rectification and other functions. The package increases LiDAR processing and display speeds.

Global Mapper Package files are now able to store LiDAR point clouds in a special compressed format, much smaller than the usual uncompressed data in LAS, a standard file format for the interchange of LiDAR data. “This allows LiDAR data to be efficiently archived or shared with other Global Mapper users,” said Cunningham. Blue Marble’s efforts to process larger volumes of data are based on its focus on better machine memory management, distributed processing across multiple machines, and storage techniques such as caching.
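
The same archiving idea can be sketched with the open-source laspy library and standard LAZ compression (which requires the lazrs or laszip backend); this is only an analogue of the approach described, not Blue Marble's package format, and the file names are placeholders.

    import laspy

    # Read an uncompressed LAS file (placeholder name).
    las = laspy.read("survey.las")
    print(las.header.point_count, "points, point format", las.point_format.id)

    # Writing to a .laz path produces a losslessly compressed copy that any
    # LAS-aware tool with LAZ support can still read.
    las.write("survey.laz")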

“We have focused in the last year on the ability to process large volumes of point clouds and manage them in a viewer so users can edit points to make sure they are classified correctly,” said Cunningham. “Users can also zoom in and out of images and pan around them.” Version 14.1 also introduces a reader for a special multifunction format that contains a variety of attribute and surface feature information useful for military work.

Vertical Slicing

Applied Imagery recently released a new version of its Quick Terrain Modeler product, which adds tools to better analyze LiDAR point clouds, including the ability to view cross-sections.

“The tool allows a user to take a vertical slice through the point cloud,” said Chris Parker, the company’s president. “This enables a much faster and more comprehensive analysis of the point clouds and makes it easier to do things like measuring trees, power lines and building heights. This feature also enables users to edit and remove points and to change their classifications based on what they see in the profile.”
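
A minimal sketch of that vertical-slice operation: keep only the points inside a narrow corridor around a profile line and express them as distance along the profile versus elevation. The endpoints, corridor width and synthetic cloud below are illustrative, and this is not Quick Terrain Modeler's API.

    import numpy as np

    def profile_slice(points, p0, p1, half_width=0.25):
        """points: (N, 3) x, y, z; p0 and p1: (x, y) endpoints of the profile line."""
        p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
        direction = p1 - p0
        length = np.linalg.norm(direction)
        direction = direction / length
        rel = points[:, :2] - p0
        along = rel @ direction                          # distance along the profile line
        across = np.abs(rel[:, 0] * direction[1] - rel[:, 1] * direction[0])  # offset from it
        keep = (across <= half_width) & (along >= 0.0) & (along <= length)
        return np.column_stack([along[keep], points[keep, 2]])   # (distance, elevation) pairs

    cloud = np.random.rand(100_000, 3) * [100.0, 100.0, 30.0]    # synthetic point cloud
    section = profile_slice(cloud, p0=(0.0, 50.0), p1=(100.0, 50.0))
    print(section.shape[0], "points fall inside the slice")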

The ability to edit point clouds is important in cleaning up “noise” that may have shown up in the point cloud data, such as returns that derive from above or below the surface being examined. “Editing allows analysts to get rid of points that don’t belong,” said Parker.

A second function of the editing process is to change the point cloud classification. “Upon examination, an analyst may determine the system classified points in error, and would therefore want to reclassify those points to reflect what they really are, so that when he saves the edit of the point cloud and passes it on to a customer or a colleague, he can feel confident he has done the necessary quality assurance and that they have an accurate product that reflects reality,” said Parker.
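
A reclassification edit of that kind can be sketched with the open-source laspy library; the selection rule below (low-noise points near the lowest return re-labeled as ground) and the file names are invented for illustration and are not Applied Imagery's workflow.

    import laspy
    import numpy as np

    las = laspy.read("tile.las")                   # placeholder input file
    cls = np.array(las.classification)
    z = np.array(las.z)

    # Suppose review of the profile showed that points flagged as class 7
    # (low noise) within 2 m of the lowest return are actually ground (class 2).
    suspect = (cls == 7) & (z < z.min() + 2.0)
    cls[suspect] = 2

    las.classification = cls                       # write the edits back
    las.write("tile_reclassified.las")             # save the QA'd copy for delivery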

Applied Imagery’s new release did not incorporate any new technology, but did move much of the processing to graphics processing units (GPUs). “GPUs enable real-time functionality, such as the ability to move a traveler down a path, to do line-of-sight analyses and to do route planning,” he continued. “Quick Terrain uses LiDAR to pan through 3-D terrain to avoid natural and man-made obstacles and dangers. This would have been impossible to do in real time in the past relying on a CPU.”
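
The geometry behind such a line-of-sight check is simple enough to sketch on a gridded surface model; a GPU implementation like the one described runs the same test against many targets in parallel. The synthetic terrain, observer height and cell positions below are assumptions.

    import numpy as np

    def line_of_sight(dem, observer, target, eye_height=1.8):
        """observer and target are (row, col) cells; returns True if the target is visible."""
        (r0, c0), (r1, c1) = observer, target
        steps = int(max(abs(r1 - r0), abs(c1 - c0)))
        rows = np.linspace(r0, r1, steps + 1).round().astype(int)
        cols = np.linspace(c0, c1, steps + 1).round().astype(int)
        terrain = dem[rows, cols]                  # surface heights sampled along the ray
        sight = np.linspace(terrain[0] + eye_height, terrain[-1], steps + 1)
        # Visible if no intermediate terrain sample rises above the sight line.
        return bool(np.all(terrain[1:-1] <= sight[1:-1]))

    dem = np.abs(np.random.randn(200, 200)).cumsum(axis=0)   # synthetic sloping terrain
    print(line_of_sight(dem, observer=(10, 10), target=(150, 180)))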

Merrick’s recent focus has been to improve the processing performance of its Merrick Advanced Remote Sensing, a Windows application used to visualize, manage, process and analyze LiDAR point cloud data. “We have automated the quality control steps to assure quality throughout the entire work flow and to decrease the labor hours involved in checking data by manual methods,” said Bethel.

Also in the last year, Merrick has been working on a feature that fuses LiDAR data with imagery data without losing resolution. “The benefit of this process is the ability to view the full resolution of the imagery in a 3-D format,” said Bethel. “In the past we could fuse image colors into LiDAR point clouds but the color was always at higher resolution than the point cloud, and the result of the fusion was a loss of resolution. We are now able to preserve all of this information in high resolution and in three dimensions.”
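
The conventional colorize-the-points direction of that fusion, the one that loses image resolution, can be sketched as a lookup of each point's x, y position in an orthoimage. The image bounds, pixel size and synthetic data below are arbitrary, and Merrick's resolution-preserving method itself is not shown.

    import numpy as np

    def colorize(points, image, origin, pixel_size):
        """points: (N, 3); image: (H, W, 3); origin: (x, y) of the image's upper-left corner."""
        cols = ((points[:, 0] - origin[0]) / pixel_size).astype(int)
        rows = ((origin[1] - points[:, 1]) / pixel_size).astype(int)   # image rows run downward
        rows = rows.clip(0, image.shape[0] - 1)
        cols = cols.clip(0, image.shape[1] - 1)
        return np.hstack([points, image[rows, cols]])                  # x, y, z, r, g, b

    cloud = np.random.rand(10_000, 3) * [500.0, 500.0, 40.0]           # synthetic points
    ortho = np.random.randint(0, 256, size=(1000, 1000, 3), dtype=np.uint8)
    colored = colorize(cloud, ortho, origin=(0.0, 500.0), pixel_size=0.5)
    print(colored.shape)                                               # (10000, 6)

Because each point picks up only one pixel, any imagery detail finer than the point spacing is discarded, which is the resolution loss Bethel describes.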

Exelis has recently repositioned its products to include LiDAR analysis tools in its ENVI image analysis platform. “This is important because it allows visualization of 3-D point clouds and the analysis that goes along with that,” said Stefanacci. “We have also provided automated 3-D feature extraction. The built-in capabilities are for buildings, trees and power lines. That is a complex and difficult process, which has now become a pushbutton tool.”

Exelis VIS has also released an application programming interface that allows users to combine LiDAR data with images and image analysis in a single work flow within the ENVI system. The functionalities that can be deployed on a desktop can now also be implemented in an enterprise environment.

CG2 has automated techniques used for LiDAR processing that go beyond the typical 2-D-rooftop-plus-height feature extrusion, according to Vaquerizo. “Viewing the scene is only the beginning,” she said.

“The point cloud is automatically clustered into individual identifiable objects which are automatically assigned a classification. This information can be used to highlight objects of interest and to compress the data based on the interpreted structure, such as a planar surface.”
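
Clustering an unstructured cloud into candidate objects can be sketched with DBSCAN from scikit-learn; the synthetic data and distance parameters below are illustrative assumptions, not CG2's pipeline.

    import numpy as np
    from sklearn.cluster import DBSCAN

    # Two synthetic "objects" plus scattered noise, standing in for a real cloud.
    rng = np.random.default_rng(0)
    box_a = rng.normal([20.0, 20.0, 5.0], 1.0, size=(2_000, 3))
    box_b = rng.normal([60.0, 40.0, 8.0], 1.0, size=(2_000, 3))
    noise = rng.uniform(0.0, 100.0, size=(500, 3))
    cloud = np.vstack([box_a, box_b, noise])

    # Points with at least 10 neighbors within 1.5 m seed a cluster; the rest is noise.
    labels = DBSCAN(eps=1.5, min_samples=10).fit_predict(cloud)
    print(labels.max() + 1, "candidate objects,", int(np.sum(labels == -1)), "noise points")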

BAE Systems’ SOCET GXP version 4.0, the company’s full-spectrum GEOINT tool, released last year, was the first version that integrated the capability to visualize LiDAR point cloud data. “Digital elevation models in TIN and Grid formats can be derived from LiDAR point clouds,” said Rosengarten. “Once integrated in the GXP platform, existing functionality can be used to do editing, analyses, modeling and texturing to better refine those data sets.”
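
A minimal sketch of the grid side of that derivation: bin the points into cells and keep the lowest return in each, a crude stand-in for proper ground filtering. The cell size and synthetic cloud are arbitrary; a TIN would instead triangulate the points (for example with scipy.spatial.Delaunay).

    import numpy as np

    def grid_dem(points, cell=1.0):
        """Lowest return per cell, returned as a 2-D elevation grid (NaN where empty)."""
        x, y, z = points.T
        cols = ((x - x.min()) / cell).astype(int)
        rows = ((y - y.min()) / cell).astype(int)
        dem = np.full((rows.max() + 1, cols.max() + 1), np.nan)
        np.fmin.at(dem, (rows, cols), z)        # fmin ignores the NaN fill value
        return dem

    cloud = np.random.rand(200_000, 3) * [100.0, 100.0, 20.0]   # synthetic point cloud
    dem = grid_dem(cloud, cell=0.5)
    print(dem.shape, float(np.nanmin(dem)), float(np.nanmax(dem)))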

SOCET GXP’s automatic feature extraction functions enable users to generate 3-D models with little human intervention. “We extract both buildings and trees,” said Rosengarten. “The buildings extracted are not just rectangles. We capture roof details and make full 3-D models out of LiDAR point clouds.”

GXP also enables users to layer imagery on top of the LiDAR surface models to enhance the visualization of the area being studied. “The images can be used to view the area from different angles, to add visualization to buildings, and to correct any errors. All of this can be exported to Google Earth and shared across the intelligence community.”

SOCET GXP version 4.1, to be released later this year, will allow users to perform measurements on the 3-D models. “They will be able to calculate things like surface areas and perimeter, and to use the 3-D model to make products like 3-D GeoPDF files and PowerPoint presentations,” said Rosengarten. Other enhancements expected in version 4.1 include increasing the speed and performance of automatic feature extraction using graphics processing unit technology. “This means utilizing graphics card technology to increase algorithm performance timelines,” said Rosengarten.
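
Area and perimeter measurements of that kind reduce to textbook geometry once a model's footprint outline is available. The sketch below applies the shoelace formula to an invented rectangular footprint and is not SOCET GXP code.

    import numpy as np

    def area_and_perimeter(vertices):
        """vertices: (N, 2) corners of a closed polygon, listed in order."""
        x, y = vertices[:, 0], vertices[:, 1]
        x_next, y_next = np.roll(x, -1), np.roll(y, -1)
        area = 0.5 * abs(np.sum(x * y_next - x_next * y))     # shoelace formula
        perimeter = np.sum(np.hypot(x_next - x, y_next - y))
        return area, perimeter

    footprint = np.array([[0.0, 0.0], [30.0, 0.0], [30.0, 12.0], [0.0, 12.0]])
    print(area_and_perimeter(footprint))    # (360.0, 84.0) for a 30 m by 12 m building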

BAE’s answer to LiDAR’s big-data problem includes a rewrite in version 4.0 of its 3-D multiport to accommodate tens of millions of data points “with no significant performance degradation,” said Rosengarten. “Version 4.1 will also have 64-bit computing, as compared to the 32 bits in current versions of GXP, so that we can take advantage of increased computer memory.”

BAE’s GXP Explorer also helps with LiDAR’s big-data sets by crawling metadata and directories and cataloging user data. “That helps users find data and see what is relevant to their problem,” said Rosengarten.

Human Intelligence

Virtual Geomatics has released a LiDAR feature extraction product that combines computer processing with human intelligence. “You tell the software what you want to extract, the software does it for you, and you say ‘yes’ or ‘no,’” said Ramesh Sridharan, the company’s chief technologist. “BNSF Railway is using this to collect data on 15,000 miles of track.”

Sridharan’s company recently introduced PanoLiDAR Viewer, a tool designed to visualize the LiDAR point cloud along with the 360-degree panoramic images collected by laser scanning systems. “The overlay of point cloud and corresponding images allows for accurate measurements and asset extraction,” he said. “With the point cloud embedded, features in the image can be picked instantly.”
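
A minimal sketch of that overlay: each point is projected into equirectangular panorama coordinates around the scanner position, after which image pixels and point ranges can be cross-referenced. The scanner position, image dimensions and synthetic cloud are assumptions, not Virtual Geomatics' projection model.

    import numpy as np

    def project_to_panorama(points, scanner_xyz, width=8000, height=4000):
        """Map each x, y, z point to (column, row, range) in an equirectangular panorama."""
        rel = points - np.asarray(scanner_xyz, float)
        rng = np.linalg.norm(rel, axis=1)
        azimuth = np.arctan2(rel[:, 1], rel[:, 0])            # angle around the scanner
        elevation = np.arcsin(rel[:, 2] / rng)                # angle above the horizon
        u = (azimuth + np.pi) / (2 * np.pi) * (width - 1)     # panorama column
        v = (np.pi / 2 - elevation) / np.pi * (height - 1)    # panorama row
        return np.column_stack([u, v, rng])

    cloud = np.random.rand(50_000, 3) * [60.0, 60.0, 10.0]    # synthetic street-level cloud
    pixels = project_to_panorama(cloud, scanner_xyz=(30.0, 30.0, 2.0))
    print(pixels[:3])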

PanoLiDAR offers several functions designed to automatically extract road-related features such as pavement edges, road lines, signs, lamp posts, trees and manholes. The user can import LiDAR in a native format and create outputs in different mapping systems. Virtual Geomatics is currently working on infusing more automation into its tools, making them easier to use and increasing analyst productivity.

Exelis VIS is working on finding synergies among its ENVI LiDAR and non-LiDAR tools, as well as with other technologies found within the Exelis parent company. “We will continue to automate feature-extraction processes and will continue to develop application programming interfaces to give users more control over their work,” said Stefanacci.

BAE’s goal is to integrate LiDAR capabilities with other remote sensor technologies. “The approach we take is to see how LiDAR data can be used with other data sets to solve problems,” said Rosengarten.

For example, tools within SOCET GXP can detect differences in the same terrain between two different LiDAR sensor passes, and calculate, for example, how much material, such as natural resources, has been removed over time. “Then we can use data from hyperspectral and multispectral sensors to identify what the material is that is being shifted,” said Rosengarten. “The point is that LiDAR data can be used with other remote sensor capabilities to help answer intelligence questions.”
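
A minimal sketch of that two-pass comparison: difference two co-registered elevation grids and convert the lost height into removed volume. The surfaces, the excavated patch and the cell size are synthetic placeholders, not SOCET GXP output.

    import numpy as np

    cell_size = 1.0                                     # meters per DEM cell
    dem_before = 100.0 + np.random.rand(500, 500) * 5   # surface from the earlier pass
    dem_after = dem_before.copy()
    dem_after[200:300, 200:300] -= 3.0                  # simulate an excavated area

    difference = dem_after - dem_before
    removed = np.where(difference < 0.0, -difference, 0.0)
    volume_removed = removed.sum() * cell_size ** 2     # cubic meters of material lost
    print(f"about {volume_removed:,.0f} cubic meters removed")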

“We’ve done a good job at improving LiDAR processing and analysis tools but we are not done by any stretch,” said Cunningham. “Hardware capabilities need to be stepped up to better process and deliver data. When you are dealing in terabytes of data you have to deliver customers a hard drive because networks don’t have enough bandwidth to transmit the volumes of data effectively. Online cloud services are really not yet a vehicle for processing and transmitting LiDAR point cloud data because of bandwidth issues. Eventually the ability to consume LiDAR data over the Internet will emerge.”

On the software side, multicore processing will be supplemented by other techniques, according to Cunningham, including the use of video cards and tools that automatically and intelligently trim the LiDAR data without devaluing the data. “This is the year of LiDAR processing,” he said. “We have taken some first steps and we will be taking more steps and releasing more tools for more powerful processing and 3-D management.” ♦
