Process_Description:
A combination of computer-controlled navigational equipment enables a fully automated flight control system for the plane and camera, allowing flight lines and image centers to be pre-computed and programmed to airborne global positioning system (ABGPS) coordinates and altitude. With a complete understanding of the project's specifications, acquisition technicians will perform a thorough analysis of environmental conditions (e.g., topography and hydrography). Technicians will determine final flight specifications such as airspeed, flight height, scan width, and pulse rate. Ground cover (natural and cultural features) within the project area will also be considered to identify the density and type of vegetation and buildings. This analysis will not only help determine some acquisition parameters but will also assist later in selecting the optimal algorithms to be used during the automated processing stage.

Mission planning consists of several steps that ensure proper flight preparation. First, the project boundaries are imported into the flight planning software. Next, available information such as elevation data, vegetation coverage data, and cultural feature extents for the area is reviewed. General assessments are made by certified photogrammetrists to determine the proper LiDAR system settings. The project aircraft will be guided by a GPS-controlled flight management system. During the mission, the crew will monitor all functions of the operations and guidance systems, allowing for continuous onboard quality assurance.

The only weather conditions that affect LiDAR collection are precipitation and cloud development below the planned flight altitude. LiDAR collection will be suspended during rain, snow, strong winds, or low clouds, and will take place during the window between spring snow-melt and emergent-leaf conditions.
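The flight specifications above (flight height, scan width, pulse rate, airspeed) trade off against one another through simple geometry. A minimal sketch of that relationship, with purely illustrative values that are not project specifications:

```python
import math

def plan_acquisition(altitude_m, half_scan_angle_deg, pulse_rate_hz, ground_speed_ms):
    """Rough LiDAR acquisition geometry from candidate flight parameters.

    A simplified flat-terrain model for illustration only; real planning
    software accounts for terrain relief, scan pattern, and beam divergence.
    """
    # Swath width on flat ground from flying height and scan half-angle
    swath_m = 2.0 * altitude_m * math.tan(math.radians(half_scan_angle_deg))
    # Ground area covered per second by one flight line
    area_per_s_m2 = swath_m * ground_speed_ms
    # Average pulse density (pulses per square meter)
    density = pulse_rate_hz / area_per_s_m2
    return swath_m, density

# Hypothetical parameter values, chosen only to exercise the formula
swath, density = plan_acquisition(
    altitude_m=1500, half_scan_angle_deg=20, pulse_rate_hz=100_000, ground_speed_ms=65)
print(f"swath {swath:.0f} m, ~{density:.2f} pulses/m^2")
```

Raising the flight height widens the swath but dilutes pulse density, which is why ground cover and project accuracy requirements drive the final parameter choices.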
The position of the aircraft is monitored at all times using the flight management system to maintain 30% side-lap to within +/- 5%. Tilt and crab will be monitored during flight operations. Tilt will be held to within 4 degrees from level, no more than 1 degree average for the entire project, and less than 5 degrees between frames. Tilt will also be monitored during post-processing to ensure compliance. The heading and track values are used to determine the crab offset, which is applied to the LiDAR system to keep the scan within 5 degrees of the line of flight.

Coverage verification is achieved using the post-flight processed GPS solution. Both horizontal and vertical aircraft position and orientation will be verified and plotted against the 3-D flight plan to ensure proper coverage of topographic data. After each flight, the data is field-checked for coverage and quality before it is shipped for processing. The LiDAR data is pre-processed in the field for a quick projection of the data acquisition swath coverage, so any seams, holes, or other unwanted artifacts can be identified at this stage to assess the need for data acquisition reflights. The flight operations team will remain on-site during this phase until all required project data is acquired.

A site at the nearest airport to the project area is selected to validate the system calibration. The calibration site is flown before each data acquisition flight mission. Two opposing flight lines, as well as a cross-flight, will be flown to identify any potential issues in the system calibration. A system calibration file is generated with each data acquisition flight mission and is used in the processing of all LiDAR data.

Upon completion of the LiDAR acquisition mission, the ABGPS and inertial measurement unit (IMU) data is processed and synchronized. Applanix Corporation's POSPac software will be used to integrate the GPS and IMU data into a smoothed best estimate of trajectory (SBET).
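The side-lap and crab quantities described above reduce to simple arithmetic. The sketch below is illustrative only; the function names and the swath/spacing values are hypothetical, and actual verification uses the post-flight processed GPS solution rather than nominal geometry:

```python
def side_lap_fraction(swath_width_m, line_spacing_m):
    """Side-lap between adjacent parallel flight lines as a fraction of
    swath width, assuming flat terrain and nominal line spacing."""
    overlap_m = swath_width_m - line_spacing_m
    return max(0.0, overlap_m / swath_width_m)

def crab_offset_deg(heading_deg, track_deg):
    """Crab angle: difference between aircraft heading and ground track,
    wrapped to [-180, 180). The offset is applied to the scanner so the
    swath stays within the allowed deviation from the line of flight."""
    return (heading_deg - track_deg + 180.0) % 360.0 - 180.0

# Hypothetical numbers: a 1000 m swath flown on 700 m line spacing
# yields the nominal 30% side-lap (spec tolerance: +/- 5%).
lap = side_lap_fraction(swath_width_m=1000.0, line_spacing_m=700.0)
print(f"side-lap {lap:.0%}")
print(f"crab {crab_offset_deg(93.0, 90.0):.1f} deg")
```

The wrap in `crab_offset_deg` matters near north: a 1-degree heading against a 359-degree track is 2 degrees of crab, not 358.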
The SBET describes the precise position and attitude of the plane at each moment for the full duration of the mission. Technicians will use a database management system to house the LiDAR dataset (usually multiple gigabytes in size). Additionally, the program incorporates a thorough checklist of processing steps and quality assurance/quality control (QA/QC) procedures that guide the technicians through the LiDAR workflow. After technicians establish the SBET, they apply the solution to the dataset to adjust each point based on the aircraft's position and orientation during LiDAR acquisition.

A second software package is used to automate the initial classification of the LiDAR point cloud based on a set of predetermined parameters. At this point technicians will refer to the ground cover research (natural and cultural features) within the project area, since some algorithms/filters recognize the ground well in forests, while others have greater capability in urban areas. During this process, each point is given an initial classification (e.g., ground, water, vegetation, or building) based on the point's coordinates and its relation to its neighbors. Classifications to be assigned include all those outlined by ASPRS standards. The initial values offer a coarse and inexact dataset but are a good starting point for the subsequent manual classification procedure. It is during this step that overlap points (those originating from neighboring flight lines) are automatically classified, based on information gathered from the ABGPS and IMU data.

After hydrographic breaklines have been collected using LiDARgrammetry, polygons are formed to represent open water bodies, including lakes, streams wider than 8 feet, and ponds. All LiDAR points that fall within these areas are classified as water. After this point, manual classification of the LiDAR points occurs.
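As one concrete, deliberately simplified illustration of neighbor-based initial classification, a grid-minimum filter flags returns near the lowest elevation in each cell as candidate ground. Production filters are far more sophisticated, and the cell size and height tolerance here are illustrative assumptions, not project parameters:

```python
import math

def classify_ground(points, cell_size=5.0, height_tol=0.5):
    """Coarse first-pass ground classification: within each grid cell,
    points within height_tol meters of the cell's lowest return are
    labeled ground; everything else is left unclassified for later
    manual review. A simplified stand-in for production filters.
    """
    # points: list of (x, y, z) tuples
    lowest = {}
    for x, y, z in points:
        key = (math.floor(x / cell_size), math.floor(y / cell_size))
        if key not in lowest or z < lowest[key]:
            lowest[key] = z
    labels = []
    for x, y, z in points:
        key = (math.floor(x / cell_size), math.floor(y / cell_size))
        labels.append("ground" if z - lowest[key] <= height_tol else "unclassified")
    return labels

# Two near-terrain returns and one canopy return in the same cell
pts = [(0.5, 0.5, 100.2), (1.0, 1.0, 100.4), (2.0, 2.0, 112.0)]
print(classify_ground(pts))  # ['ground', 'ground', 'unclassified']
```

This kind of filter works well on open terrain but misclassifies in dense forest or dense urban areas, which is why the ground cover research informs the choice of algorithm.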
The software permits technicians to view the point cloud in a number of ways and within a number of contexts provided by other available datasets such as orthoimagery or road centerline files. After querying specific tiles into their workspace, technicians will begin scanning through each tile, searching for areas where the automated classification routine erred. They will view the datasets from numerous angles, generate triangulated irregular networks (TINs) or contours, and consult shaded relief models to fully understand the data model at each point of their analysis. Using this information and technical experience, our technicians will detect and investigate any areas that display anomalies, i.e., areas where the automated classifications show weakness.

The technicians will systematically work their way through these anomalous areas: taking narrow swaths of data, viewing each swath from isometric and profile viewpoints, and reclassifying each point in accordance with their judgment. They will focus on the eligible classifications: unclassified (buildings, vegetation, and other non-terrain/non-noise features), ground, and noise. Points falling under the water and overlap categories are weeded out effectively during the automated procedures. After this process is complete for an area, the technician will generate new TINs and contours to confirm that the anomaly has been fully addressed. If it has, the technician will continue scanning the tile for other anomalies. This procedure will continue until the tile has been determined to meet or exceed project specifications. At this point, the technician will query the next adjacent tile and continue working.
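The anomaly hunt described above can be loosely approximated in code: flag ground-classified points whose elevation departs sharply from the median of their neighbors. This is only a crude stand-in for the technicians' visual TIN and contour inspection; the search radius, tolerance, and sample points are all illustrative assumptions:

```python
from statistics import median

def flag_spikes(ground_points, radius=10.0, z_tol=2.0):
    """Return indices of ground points whose elevation differs from the
    median of nearby ground points by more than z_tol meters -- e.g. a
    canopy or building return the automated filter left in the ground
    class. O(n^2) brute force; fine for a sketch, not for full tiles.
    """
    flagged = []
    for i, (x, y, z) in enumerate(ground_points):
        neighborhood = [zz for (xx, yy, zz) in ground_points
                        if (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2]
        if len(neighborhood) > 1 and abs(z - median(neighborhood)) > z_tol:
            flagged.append(i)
    return flagged

# Four terrain returns near 100 m and one 108 m spike
pts = [(0, 0, 100.0), (5, 0, 100.2), (0, 5, 99.9), (5, 5, 100.1), (2, 2, 108.0)]
print(flag_spikes(pts))  # [4]
```

Flagged points would then be the candidates a technician inspects from isometric and profile viewpoints before reclassifying.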