Detection of morphology defects in pipeline based on 3D active stereo omnidirectional vision sensor
Abstract
There are many kinds of defects in pipes, and they are difficult to detect with a low degree of automation. In this work, a novel omnidirectional vision inspection system for the detection of morphology defects is presented. An active stereo omnidirectional vision sensor is designed to obtain the texture and depth information of the inner wall of the pipeline in real time. The camera motion is estimated and the spatial location of the laser points is calculated accordingly. Then, the faster region proposal convolutional neural network (Faster R-CNN) is applied to train a detection network on an image database of pipe defects. Experimental results demonstrate that the system can measure and reconstruct the 3D space of the pipe with high quality, and that the retrained Faster R-CNN achieves good detection results in terms of both speed and accuracy.
1 Introduction
Pipelines are widely used to transport crude oil, natural gas, refined oil products, CWM etc. However, an increasing number of these pipes are damaged by ageing, and internal damage develops. If the damage grows large, serious accidents might occur, leading to heavy losses. Because most pipes have a diameter of 700 mm or less, human visual inspectors cannot enter them directly. Therefore, automatic inspection by robots, which overcomes this spatial constraint, is a feasible way to measure pipes, and great effort has been devoted in this domain to improving the real-time performance, accuracy, and robustness of inspection algorithms.
Considerable work has been done on non-destructive detection and vision-based methods for the detection and modelling of deformations/dents in structures such as pipes. Non-destructive detection, for example ultrasound detection [1], magnetic flux leakage detection [2], eddy current detection and ray detection [3], is the branch of engineering concerned with non-contact methods for measuring and evaluating defects in pipes. The ultrasound detection method determines the size and location of defects through the reflected ultrasound signal. By testing whether the magnetic field lines are curved, the magnetic flux leakage detection method finds internal defects of the pipe. The eddy current detection method determines deformations based on changes of magnetic field intensity. The ray detection method is accomplished by real-time imaging with alpha rays, gamma rays and neutron rays. Because they depend on repeatedly transmitting and receiving signals, all the methods above cannot provide any visual clues, which leads to low detection efficiency.
On the other hand, visual detection [4] is a newer method used extensively to evaluate the condition or quality of a component. Typically, automatic detection by a robot equipped with cameras is desirable and easy to perform. However, many detection systems have to rotate the camera to record images, because normal cameras with a narrow view can see in only one direction while a pipe has a cylindrical geometry. Promisingly, panoramic imaging technology and omnidirectional vision sensors with a wide field of view offer clear benefits for 3D measurement and reconstruction. Kannala et al. [5] introduced a pipe detection system using an omnidirectional vision sensor that can efficiently take images of the full 360° surroundings at a time, and used the structure from motion (SFM) method to estimate the camera motion for 3D measurement. Given the complicated background of internal pipes, it is not ideal to extract and track feature points to acquire the position relation between two images. Structured light visual methods [6-10], a kind of active measurement, have been proposed to solve this problem by means of an already calibrated model. Zhang et al. [11] used a circular structured light system to reconstruct the shape of a steel pipe: a laser is projected using a mirror, and the laser ring streaks displayed on the inner surface of the pipe are recorded by a CCD camera. Their method extracts a point cloud from a single image without finding corresponding points. However, the position and orientation of the camera must remain constant during measurement. Accordingly, it is hard to integrate the measurement results if the camera moves.
Two key problems of pipeline defect detection are as follows: (1) acquisition of texture information of the inner pipe for the detection of defects such as cracks, corrosion and roots; (2) 3D coordinate calculation of the point cloud of the pipe for deformation detection. In this work, a vision detection system is proposed to realise both of these goals. The main techniques behind the system are as follows: (1) obtaining laser slice panoramic images and texture panoramic images simultaneously; (2) 3D measurement and reconstruction of pipes on the basis of computer vision; (3) detection and classification of four defects (crack, corrosion, roots, and dark branch) based on Faster R-CNN.
2 System overview
The active stereo omnidirectional vision sensor (ASODVS) system is composed of a crawling robot, a microcontroller, an omnidirectional vision sensor (ODVS) and an omnidirectional laser generator assembled at the top of the ODVS. With the exception of the crawling robot, the components are original designs of our laboratory. The omnidirectional laser generator contains one circular laser and a conical mirror, which reflects the laser light vertically onto the inner wall of the pipe. Thus, the laser of the generator can cover a whole cross-sectional plane during one scan. Fig. 1 shows the structure of the system. The detection robot travels along the pipe, so during one trip the whole 3D environment can be scanned. The laser points and the texture information of the inner pipe can be captured in panoramic images by the ODVS. The laser light reflects the real shape of the pipe, so it can be used to calculate the coordinates of the points of the inner wall. The texture panorama records the texture and colour information and can be used for the detection of defects such as cracks, corrosion and roots.
Fig. 1 Pipeline visual inspection robot
2.1 ODVS with single viewpoint
The custom-built ODVS is a key component of the system. An omnidirectional picture of the environment can be obtained through the hyperbolic mirror of the ODVS, with a horizontal viewing angle of 360° and a vertical viewing angle of 120° [12, 13]. The catadioptric properties of the hyperboloid mirror are used to achieve the single-viewpoint characteristic of the ODVS. The angle of incidence is required to calculate the distance of each pixel, so camera calibration is inevitable.
By calibration, the relationship between the internal parameters (the internal geometry and optical properties of the camera) and the external parameters (the coordinates of the camera in the real world) can be established. The calibration algorithm can be found in [14].
The projection model proposed by Micusik et al. [15] is used to set up a map between the image plane and the real world, as shown in Fig. 2. It mainly consists of two different reference planes: the sensor plane and the image plane. The sensor plane, shown in the left part of Fig. 2, is assumed to be perpendicular to the optical axis of the mirror, and the intersection of this plane with the optical axis is the origin point. The image plane, represented by pixel coordinates, is related to the CCD video camera; its origin is the centre of the coordinate system, and its axis aligns with the optical axis of the mirror. In Fig. 2, X is a point in the 3D real-world coordinate system, which has a projection point in the sensor plane and a corresponding pixel in the image plane.
Fig. 2 Detection robot of pipe based on ASODVS
Through the projective transformation, X is mapped to a point A on the mirror, which is then imaged through the optical centre C of the camera. The pixel in the image plane is the affine transformation of the intersection of the ray through C with the sensor plane. Classically, the relationship between the real-world 3D space and the pixels of the image plane can be established by the single-viewpoint catadioptric camera imaging model.
The transform function can be obtained by the following equation:
(1)
Here, the terms are the projective transformation matrix, the rotation matrix, and the space-to-mirror transformation matrix.
Mathematically, the transformation function from the sensor plane to the image plane is expressed by the following equation:
u′ = Au″ + t (2)
A modified perspective projection model is proposed by Scaramuzza et al. [16]. The transform relationships can be described by the following equation:
λ · [u″, v″, f(ρ)]^T = P · X, λ > 0 (3)
where f is a function used to describe the rotational symmetry of the hyperboloid mirror; f can be derived by using a Taylor expansion
f(ρ) = a0 + a1ρ + a2ρ² + ⋯ + aNρ^N (4)
where ρ is the distance from the point on the image plane to the centre of the panorama.
The best catadioptric camera imaging model is obtained by combining Scaramuzza's and Micusik's models. However, in actual application, there are errors and deficiencies in the process of assembling the ODVS. Equation (5) is a simplified model that uses the error model presented by Scaramuzza
(5)
The ODVS is rotated around a circle and takes a series of panoramic images during the process, from which the mapping equations from 3D space locations to pixels in the image plane are derived. The calibration results of the ODVS are given in Table 1. The angle of incidence of each pixel in the image plane can be obtained from the internal and external parameters of the ODVS, as shown in the following equation:
α = arctan(f(ρ)/ρ) (6)
where α is the angle of incidence, ρ is the distance from the point to the centre of the image plane, and the remaining quantities are internal and external parameters of the ODVS. A mapping table between the image plane and the angles of incidence is built to reduce calculation.
Table 1. ODVS calibration results
Calibration parameters: a0 = −110.210, a2 = 0.0023, a4 = −0.000
A = [0.9999, −10.1e−05; −9.1e−05, 1], t = (49.461, −15.692)
Centre point: (587.096, 348.682)
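As a sketch of how such a mapping table can be built, the following Python snippet evaluates a Scaramuzza-style polynomial f(ρ) with coefficients of the shape reported in Table 1 and precomputes one incidence angle per radial distance. The even-terms-only polynomial and the arctan relation are assumptions for illustration, not the paper's exact formulas.

```python
import math

# Hypothetical Scaramuzza-style mirror polynomial with even terms only,
# using coefficients of the form listed in Table 1 (a1 = a3 = 0 assumed).
def f(rho, a0=-110.210, a2=0.0023, a4=0.0):
    return a0 + a2 * rho ** 2 + a4 * rho ** 4

# Assumed form of eq. (6): incidence angle of the ray through a pixel at
# radial distance rho from the panorama centre.
def incidence_angle(rho):
    return math.atan2(f(rho), rho)

# Mapping table: one precomputed angle per integer radial distance, so the
# polynomial is not re-evaluated for every pixel of every frame.
angle_lut = [incidence_angle(rho) for rho in range(1, 400)]
```

Precomputing the table once per calibration matches the paper's goal of reducing per-frame calculation.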
2.2 Omnidirectional laser generator and ASODVS + ODVS
The design goal is rapid, direct and synchronised acquisition of all the panoramic space geometry, texture and colour information in the pipeline. Therefore, it is necessary to integrate the ASODVS with the ODVS. Owing to the limited space in the pipeline, a compact and miniaturised design is necessary. The ASODVS + ODVS design is shown in Fig. 3a, and its model in Fig. 3b.
Fig. 3 Design of the ASODVS + ODVS in this paper
(a) Schematic drawing of ASODVS + ODVS, (b) Model of the ASODVS + ODVS
As shown in Fig.?3b, the infrared filter is installed in the middle between the camera unit 1 and the camera unit 2. The camera unit 2 is installed in the real focus of the hyperboloid mirror, and the camera unit 1 is installed in the virtual focus of the hyperboloid mirror. In the real focus, the laser slice panoramic images are imaged by ASODVS, while in the virtual focus, panoramic images with texture and colour are obtained by ODVS.
The advantage of this design is that the two panoramic cameras share one catadioptric mirror, so they have the same panoramic imaging parameters, it is easier to integrate the panoramic colour texture images with the laser panoramic images, and their imaging does not interfere with each other. In addition, the colour texture panoramic images and the laser slice panoramic images can be processed at the same time, and all the panoramic space geometry, texture and colour information in the pipeline can be obtained quickly, directly and synchronously.
As shown in Fig. 4, the central wavelength of the filter is 760 nm; thus, visible light (red line in Fig. 3b) is transmitted through the filter, while infrared light (blue line in Fig. 3b) is reflected by it. Hence, after reflection by the hyperboloid mirror, the visible light passes through the filter and is imaged by camera unit 1, and the infrared light emitted by the infrared laser transmitter is reflected by the hyperbolic mirror and then by the filter, and is imaged by camera unit 2. Because the two units share one hyperboloidal catadioptric mirror, with the filter configured between the real focus and the virtual focus of the hyperbolic mirror, the colour texture panoramic images correspond to the laser slice panoramic images as long as the parameters of the two camera units are set consistently, and the synchronisation and consistency of the two kinds of panoramic images can be ensured.
Fig. 4 Wavelength characteristics of infrared filters
(a) Wavelength characteristics of infrared filter, (b) Transmission and reflection of infrared filter
In view of the above analysis, camera unit 1 and camera unit 2 are selected to be of the same type and resolution.
Fig. 5 shows the laser slice panoramic image and the texture panoramic image.
Fig. 5 Laser slice panoramic image and texture panoramic image
(a)?Omni‐directional texture image,?(b)?Omni‐directional infrared image
2.3 Panoramic images acquisition controlling system
In this work, an industrial transportation pipeline is the detection object, and the defect detection controlling system is designed as follows:
· Step 1: the pipeline inspection robot is placed at the bottom of the pipe, and the diameter and length of the pipeline are input. The length recorder is initialised. To ensure that the panoramic texture images are continuous (there are overlapping regions in two consecutive frames), the moving step of the robot is calculated as
(7)
where the two angles involved are the angle of elevation and the angle of depression of the ODVS.
· Step 2: the moving distance of the robot is calculated as
(8)
where L is the current length of the length recorder. When the robot reaches this distance, the ODVS captures the panoramic texture image and saves it.
· Step 3: the robot moves on uniformly.
· Step 4: the ODVS captures laser panoramic images at a fixed frequency; the current movement length is recorded and the panoramic images are saved accordingly.
· Step 5: if the current moving step is not finished, jump to step 4.
· Step 6: if the end of the pipeline has not been reached, jump to step 2.
· Step 7: stop.
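The acquisition loop above can be sketched as follows; run_inspection, its parameters, the per-step laser frame count and the capture callbacks are all placeholders, since the original symbols and file-naming scheme were lost in extraction.

```python
# Sketch of the acquisition loop of Section 2.3 under assumed names.
def run_inspection(total_length, step, capture_texture, capture_laser):
    """Drive the robot along the pipe, alternating texture and laser captures."""
    length = 0.0                          # length recorder (step 1)
    texture_frames, laser_frames = [], []
    while length < total_length:          # step 6: stop at the end of the pipe
        texture_frames.append(capture_texture(length))   # step 2
        target = min(length + step, total_length)
        while length < target:            # steps 4-5: laser frames within one step
            length += step / 4.0          # assumed: four laser frames per step
            laser_frames.append(capture_laser(length))
    return texture_frames, laser_frames   # step 7
```

For a 10 m pipe with a 2 m step this yields one texture panorama per step boundary and a fixed number of laser panoramas in between.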
3 3D coordinates calculation based on ASODVS
The ODVS takes a panoramic image sequence with the laser light of the cross-section, and the 3D coordinates of the points are then calculated as follows. Fig. 6 shows the geometry measurement based on the ASODVS. Establishing a Gaussian coordinate system with the single viewpoint as origin, the spatial location of the point cloud can be represented by (r, φ, β), where r is the distance between the object point P and the origin, φ is the incident angle from point P to the origin, and β is the azimuth angle of point P relative to the origin. The above parameters satisfy
(9)
where ρ is the distance between the image point corresponding to object point P and the centre of the imaging plane, and H is the projection distance between point P and the viewpoint along the Z-axis. The accuracy of the incident angles depends greatly on the accuracy of calibration and of the extraction of the laser projection, so extracting the incident point from the panoramic image sequence is the key part of the proposed method.
Fig. 6 Point cloud geometry measurement based on ASODVS
Extracting the laser light is the key to calculating the 3D coordinates; the frame difference method is used in this paper to get the laser lines. Two consecutive panoramic images acquired as the robot marches along the pipe show an obvious difference: if the absolute value of the difference in brightness is larger than a set threshold, the pixel is extracted.
The laser light reflected from the internal pipe surface may be discontinuous and has some width. Therefore, we use the Gaussian approximation method to extract the peak (the pixel that has the highest intensity). The three brightest laser points in succession are selected and the sub-pixel offset d is calculated from these values as
d = (ln I(i−1) − ln I(i+1)) / (2[ln I(i−1) − 2 ln I(i) + ln I(i+1)]) (10)
where I(i) is the intensity value at i, and i is the image coordinate of the observed peak intensity. As a result, i + d is obtained as the image coordinate of the laser light.
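A minimal implementation of this three-point Gaussian peak interpolation might look as follows; the exact notation of eq. (10) did not survive extraction, so the standard Gaussian-approximation formula for laser-stripe detection is used.

```python
import math

def gaussian_subpixel_peak(intensity, i):
    """Sub-pixel laser peak via three-point Gaussian interpolation.

    i is the index of the brightest pixel along the scan direction; the
    returned coordinate is i + d, with d the sub-pixel offset of eq. (10)
    (standard Gaussian-approximation form, reconstructed).
    """
    a = math.log(intensity[i - 1])
    b = math.log(intensity[i])
    c = math.log(intensity[i + 1])
    d = 0.5 * (a - c) / (a - 2.0 * b + c)  # offset in [-0.5, 0.5]
    return i + d
```

For an exactly Gaussian intensity profile this interpolation recovers the true peak position.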
To connect the discontinuous edges, through (11) and (12) we compare the response intensity and gradient direction of the gradient operator to determine whether two points are on a line
|G_b − G_u| ≤ T_G (11)
|θ_b − θ_u| ≤ T_θ (12)
where G_b is the gradient value of the boundary point, G_u is the gradient value of the unconfirmed point, T_G is the gradient threshold, θ_b is the direction angle of the gradient vector of the boundary point, θ_u is the direction angle of the unconfirmed point, and T_θ is the direction angle threshold. When both of these inequalities hold, the unconfirmed point belongs to the boundary, and a complete laser light can be acquired. As a result, the 3D coordinates of the measurement point can be calculated, and the location is described in Cartesian coordinates as
x = r cos φ cos β, y = r cos φ sin β, z = r sin φ (13)
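Assuming the incident angle φ is measured from the horizontal plane through the viewpoint (the paper's exact angle convention was lost in extraction), the conversion of a measured point to Cartesian coordinates is straightforward:

```python
import math

# Assumed angle convention: phi measured from the horizontal plane through
# the single viewpoint, beta the azimuth around the pipe axis.
def spherical_to_cartesian(r, phi, beta):
    x = r * math.cos(phi) * math.cos(beta)
    y = r * math.cos(phi) * math.sin(beta)
    z = r * math.sin(phi)
    return x, y, z
```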
4 Camera motion estimation
4.1 Corresponding point acquisition
In this work, the speeded-up robust features (SURF) algorithm is used to extract and match the feature points of the panoramic texture images [17]. SURF is an improvement on the scale-invariant feature transform (SIFT) algorithm, with higher matching precision and better stability [18].
Although SURF has a high matching accuracy, the texture panoramic image taken by the ODVS is distorted, so false matching points remain. The random sample consensus (RANSAC) [19] algorithm is used to remove the false matching points (Fig. 7).
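A toy RANSAC pass over matched keypoints can be sketched as below; for illustration the motion model is a pure 2D image translation with a one-match minimal sample, which is deliberately simpler than the epipolar model actually estimated in Section 4.2, and all names are placeholders.

```python
import random

# Toy RANSAC for match filtering. Model: pure 2D translation, one-match
# minimal sample; src/dst are lists of (x, y) keypoint positions.
def ransac_translation(src, dst, iters=200, tol=2.0, seed=0):
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        i = rng.randrange(len(src))          # minimal sample: a single match
        dx = dst[i][0] - src[i][0]
        dy = dst[i][1] - src[i][1]
        inliers = [j for j, (s, d) in enumerate(zip(src, dst))
                   if abs(d[0] - s[0] - dx) < tol and abs(d[1] - s[1] - dy) < tol]
        if len(inliers) > len(best_inliers):  # keep the largest consensus set
            best_inliers = inliers
    return best_inliers
```

Matches that disagree with the dominant translation by more than the tolerance are rejected as false matches.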
Fig. 7
Feature‐mapping results of two adjacent panoramic texture images
4.2 Estimation of camera motion
The matrix that contains the relative position and orientation between two observation points is calculated to estimate the camera motion.
Let the translation t and the rotation R relate the coordinate systems of the two viewpoints; see Fig. 8. Let the projections of a 3D point onto the two mirrors be denoted by the ray vectors r1 and r2. The coplanarity of the vectors r1, t and Rr2 can be expressed in the first coordinate system as
r1 · (t × Rr2) = 0 (14)
where × denotes the vector product. The coplanarity constraint (14) is rewritten in matrix form as
r1^T E r2 = 0 (15)
where
E = [t]× R (16)
[t]× = [0, −t3, t2; t3, 0, −t1; −t2, t1, 0] (17)
Fig. 8
Opipolar geometry of ODVS
Equation (15) is transformed into (18)
u^T e = 0 (18)
where u is the vector of pairwise products of the components of r1 and r2, and e is the nine-dimensional vector formed from the elements of E. The essential matrix E is derived by simultaneously solving the equations for more than eight pairs of corresponding ray vectors. The function in the following equation should be minimised:
F(e) = ‖Me‖², subject to ‖e‖ = 1 (19)
where M is the matrix whose rows are the vectors u^T of the corresponding pairs. The matrix E is derived from e, which is given as the eigenvector associated with the smallest eigenvalue of M^T M. The rotation matrix R and the translation vector t can be derived from the matrix E.
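The linear part of this estimation (stack one equation per ray pair, take the null-space direction) can be sketched with NumPy; essential_from_rays is a hypothetical name, and rank-2 enforcement and the decomposition of E into R and t are omitted.

```python
import numpy as np

# Linear eight-point-style estimation of the essential matrix from unit ray
# vectors (r1 from the first view, r2 from the second).
def essential_from_rays(rays1, rays2):
    # One constraint r1^T E r2 = 0 per pair; np.kron(r1, r2) matches the
    # row-major vectorisation of E.
    m = np.asarray([np.kron(r1, r2) for r1, r2 in zip(rays1, rays2)])
    _, _, vt = np.linalg.svd(m)
    e = vt[-1]            # right singular vector of the smallest singular value
    return e.reshape(3, 3)
```

Taking the last row of vt is equivalent to taking the eigenvector of M^T M with the smallest eigenvalue, as stated above, but numerically better conditioned.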
5 In‐pipe defects detection based on Faster R‐CNN
As mentioned above, the ODVS can obtain laser slice panoramic images and texture panoramic images in a single scan. The laser slice panoramic images are used for 3D coordinate calculation of the inner pipe point cloud, and the texture panoramic images are used for the detection of defects such as cracks, corrosion, roots, and dark branches.
In this section, we apply Faster R-CNN to train a detection network on our database of pipe defects, identifying four classes of defects and predicting their object bounds simultaneously.
5.1 Preprocessing of texture panoramic images
In order to eliminate the image distortion caused by panoramic imaging, an expansion (unwrapping) algorithm is applied to the panoramic images.
Firstly, a coordinate system is set up at the centre of the panoramic image, with the inner diameter r, the external diameter R, the radius of the middle circle, and the azimuth. Then, the coordinate system of the cylindrical expansion image is set up, and the panoramic image is unfolded in a clockwise direction according to the azimuth angle, establishing a one-to-one correspondence between the points of the panoramic image and those of the expansion image.
The formulas are as follows; in the process of calculation, the grey value at non-integer coordinates is obtained by bilinear interpolation
x_p = x_O + (r + y*) cos β (20)
y_p = y_O + (r + y*) sin β (21)
where??and??are, respectively, the abscissa and ordinate of the unfolded image,?and??are, re
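A bare-bones version of this unwrapping with bilinear interpolation might look as follows; image access is abstracted as a function img(x, y), and the linear radius schedule and clockwise azimuth mapping are assumptions for illustration, since the paper's exact mapping symbols were lost.

```python
import math

# Bare-bones panoramic unwrapping. img(x, y) abstracts pixel access on the
# panorama; (cx, cy) is the panorama centre, r_in/r_out the inner and outer
# radii, width x height the size of the unfolded image.
def unwrap(img, cx, cy, r_in, r_out, width, height):
    out = [[0.0] * width for _ in range(height)]
    for row in range(height):
        rho = r_in + (r_out - r_in) * row / (height - 1)   # radius per row
        for col in range(width):
            beta = -2.0 * math.pi * col / width            # clockwise azimuth
            x = cx + rho * math.cos(beta)
            y = cy + rho * math.sin(beta)
            x0, y0 = int(math.floor(x)), int(math.floor(y))
            fx, fy = x - x0, y - y0
            # bilinear interpolation of the grey value at (x, y)
            out[row][col] = ((1 - fx) * (1 - fy) * img(x0, y0)
                             + fx * (1 - fy) * img(x0 + 1, y0)
                             + (1 - fx) * fy * img(x0, y0 + 1)
                             + fx * fy * img(x0 + 1, y0 + 1))
    return out
```

Each row of the unfolded image then corresponds to one radius between the inner and outer circle, and each column to one azimuth.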