Automation Control Systems - Coursework

Introduction to Simultaneous Localization And Mapping (SLAM) for mobile robot. Navigational sensors used in SLAM: Internal, External, Range sensors, Odometry, Inertial Navigation Systems, Global Positioning System. Map processing and updating principle.


Abstract
The simultaneous localization and mapping (SLAM) problem asks whether a mobile robot, placed at an unknown location in an unknown environment, can incrementally build a consistent map of that environment while simultaneously determining its own location within the map. A key element of early work on this problem was to show that there must be a high degree of correlation between the estimates of the locations of different landmarks in a map and that these correlations grow with successive observations. Several groups already working on mapping and localization, notably at the Massachusetts Institute of Technology [7], Zaragoza [8], [5], the ACFR at the University of Sydney [20], and others, began working on SLAM - also called concurrent mapping and localization (CML) - in indoor, outdoor, and subsea environments. Recent advances in simultaneous localization and map building for mobile robots have been made using sonar and laser range sensing to build maps in 2D, while vision-based approaches have been largely overlooked in the literature. An alternative way to estimate the vehicle location is to measure the relative position of a known object in the environment using a sensor mounted on the vehicle.

This work analyzes modern methods and means of navigation for an autonomous mobile robot. Matching against distinctive objects of the surrounding environment (including visual features) is used in many areas, for example in solving the task of simultaneous localization and mapping (SLAM). The goal of the work was to analyze different methods and means of visual positioning and to create a system for an autonomous mobile robot that is able to separate distinctive objects of the environment (objects with certain color and geometric characteristics), determine the distance to these objects, and position itself relative to them. Analysis of the EKF and RBPF showed that these filters are computationally demanding for analyzing the surrounding environment, so a range-measurement algorithm was chosen to localize and position the robot. The segmentation experiments showed that the best results are obtained when the colors dominating the current image (those above a threshold in the histogram of color distribution) are determined first and image segmentation is then performed with respect to those colors.

Conclusions
After a thorough investigation of all possible solutions, the A4Tech PK-635 web camera was chosen as the vision sensor because of its technical characteristics: sufficiently high resolution and capture speed at low cost.

To process the image, methods of image segmentation and geometric feature extraction were applied, since the combination of these methods gives high accuracy.

Analysis of the EKF and RBPF showed that these filters are computationally complex for analyzing the surrounding environment and require a powerful computing system, so a range-measurement algorithm was chosen to localize and position the robot in the surrounding environment.

3. Development of visual system of SLAM

3.1 Selection of technical means of the developed system

3.1.1 Vision sensor used in the developed system

Web-camera description

IMG_bebbf30c-5a41-428a-b7c2-b2e61106c437

Figure 3.1 - Web camera A4Tech PK-635, 350K pixels

Manufacturer: A4 Technology (A4Tech)

Model: A4Tech PK-635

Driver / operating system: Windows 9x/ME/2000/XP/2003

Design: movable protective lens cover, flexible support

Viewing angle: 54°

Sensor: CMOS, 300K pixels

Capture rate: 30 fps

Exposure: automatic

Button: snapshot button

Resolution: 640×480

Distance to object: from 10 cm

Interface: USB 1.1

Built-in microphone: present

3.1.2 Onboard computer selection

The hardware and software requirements for the given project are:

1. CPU: 1.666 GHz

2. RAM: 128 MB

3. Video card: 64 MB

4. Monitor (17")

5. Web camera (onboard the robot)

6. Operating system: Windows 2000/XP

7. An LPT port must be present and working in the computer.

8. A C language compiler for Matlab.

3.2 Development of color segmentation algorithm

3.2.1 Algorithm of image color segmentation

The first step is to classify each pixel using the nearest-neighbor rule. Each color marker now has an "a" and a "b" value. Each pixel in the image is classified by calculating the Euclidean distance between that pixel and each color marker; the smallest distance indicates that the pixel most closely matches that color marker. For example, if the distance between a pixel and the second color marker is the smallest, the pixel is labeled as that color.

After that, the results of the nearest-neighbor classification can be displayed. For this purpose a label matrix is created that contains a color label for each pixel in the image. The label matrix is used to separate objects in the original image by color.
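As an illustration of this step, the following Matlab sketch performs the nearest-neighbor classification in the (a, b) plane. The marker values and variable names are assumptions made for the sketch, not those of the actual program; in practice the marker values would be sampled from reference regions of each color marker.

% Nearest-neighbor pixel classification in the (a, b) plane (illustrative sketch)
rgb = imread('snapshot.png');               % current web-camera frame
lab = rgb2lab(rgb);                         % convert to L*a*b (L in 0..100)
a = lab(:, :, 2);
b = lab(:, :, 3);
markers = [ 55  45;                         % red    (a, b) - assumed values
           -50  50;                         % green
           -10  75];                        % yellow
nColors = size(markers, 1);
dist = zeros([size(a), nColors]);
for k = 1:nColors
    dist(:, :, k) = (a - markers(k, 1)).^2 + (b - markers(k, 2)).^2;
end
[~, labelMatrix] = min(dist, [], 3);        % label of the nearest color marker per pixel
redMask = (labelMatrix == 1);               % e.g. all pixels assigned to the red marker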

The next step is to check and analyze the reaction of the system to different colors using the values of the color markers.

IMG_f0715ebe-405f-434e-b779-9e2927c1a8f2

Figure 3.2 - Block diagram of image color segmentation algorithm

Then the perimeter and area of each object are calculated, and from these two values the roundness metric is computed. A threshold is then set on the metric, and the metric of every closed object is compared against it in order to exclude objects that are not related to the markers, i.e. to remove regions whose shape differs from a circle.
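The text does not state the exact formula; a common roundness metric computed from these two quantities (assumed here) is

metric = 4 * pi * Area / Perimeter^2,

which equals 1 for an ideal circle and decreases towards 0 as the shape departs from a circle, consistent with the threshold comparison described above.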

The next step is to remove closed objects whose area is smaller than a set threshold; such objects are considered noise.

3.2.2 Investigation of factors that influence the color index

As was mentioned before, the L*a*b color model is based on three components: "L" (luminance, or lightness), "a" (the chromatic component from green to magenta), and "b" (the chromatic component from blue to yellow). It was assumed that the luminance component determines only the lightness of the current image and does not carry information about color. Based on this assumption a segmentation algorithm was developed, which gave the following results (segmentation was performed for the red, green and yellow colors using only the "a" and "b" color components).

IMG_997236a1-ad06-45c3-aac7-2f9f5869e0a8

Figure 3.3 - Snapshot, made by web camera

IMG_81b29e00-c0a8-4e21-adeb-19324e460b5d

Figure 3.4 - Results of segmentation for red, green and yellow colors, considering "a" and "b" components of color

It is visible from the obtained results that the segmentation procedure does not give the required results and accuracy.

The next step was to analyze histogram of color distribution in the current image.

IMG_5c1e82ca-f234-41e7-97b5-7a1fa442c050

Figure 3.5 - Histogram of color distribution in the image shown in figure 3.3

It is visible from the histogram that the greatest number of pixels belongs to the background (white and grey colors), and there are 5 colors above the threshold value (500 pixels): red, yellow, green, pink and blue. There are also 2 colors below the threshold value, which means that the number of pixels of these colors is very low.

During the investigation it was noticed that, for the same color marker, the lightness component changes when the background color changes. This means that lightness also has to be taken into account when determining the color index. The following table shows part of the results obtained during the investigation.

Table 3.1 - Values of color components for red, green and yellow colors at different backgrounds

Background | RED (L, a, b)  | GREEN (L, a, b) | YELLOW (L, a, b)
White      | 83, 167, 159   | 204, 86, 181    | 245, 110, 207
Black      | 143, 177, 162  | 211, 83, 181    | 246, 112, 195
Red        | 97, 165, 157   | 217, 72, 172    | 237, 97, 192
Orange     | 99, 168, 155   | 190, 74, 172    | 229, 93, 185
Mixed      | 104, 178, 167  | 222, 82, 167    | 226, 106, 195
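The values in Table 3.1 can be obtained, for example, by averaging the color components over a manually selected region covering the marker. A minimal Matlab sketch is given below (the region coordinates are hypothetical); applycform with the srgb2lab transform returns components scaled to 0..255, which matches the ranges used in the table.

% Sketch: measure the mean L*a*b components of a marker region (coordinates assumed)
rgb   = imread('snapshot.png');
cform = makecform('srgb2lab');
lab   = applycform(rgb, cform);          % uint8 L*a*b, components scaled to 0..255
rows  = 120:160;  cols = 200:240;        % hypothetical region covering the red marker
L = mean2(lab(rows, cols, 1));
a = mean2(lab(rows, cols, 2));
b = mean2(lab(rows, cols, 3));
fprintf('marker components: L = %.0f, a = %.0f, b = %.0f\n', L, a, b);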



That is why the next step was to add the "L" component and to perform segmentation using information not only about the red, green and yellow colors, but also about pink, blue and the two background colors (white and grey). The results were as follows:

IMG_0fb79f89-abe2-4586-af6a-42788dc920cd

Figure 3.6 - Results of segmentation for red, green, yellow, pink, blue, white and gray colors, considering "a", "b" and "L" components of color

From the results illustrated in figure 3.6 we can conclude that, for the best segmentation results, we have to consider the set of colors present in the image and the quantity of each color, i.e. how many pixels of each color are present in the current image. After 20 experiments similar to those shown in figures 3.3-3.6, it was concluded that it is highly recommended to first determine the colors that exceed the threshold value in the histogram of color distribution, i.e. the colors that dominate the current image, and then to perform image segmentation with respect to those previously defined colors. This principle significantly improves the accuracy of segmentation and, as a result, decreases the positioning error. As illustrated in figure 3.4, the accuracy of segmentation is rather poor when not all colors above the threshold value are considered. Figure 3.6 shows the results of segmentation that takes into account all colors above the threshold value, and the comparison of the results proves that the accuracy of segmentation in figure 3.6 is much higher than in figure 3.4.
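A sketch of this principle is given below (the palette size of 16 colors is an assumption; the 500-pixel threshold is the one used in the histogram analysis above). The frame is coarsely quantized, the pixels of each palette color are counted, and only the dominant colors are kept as the marker set for the nearest-neighbor segmentation of section 3.2.1.

% Sketch: determine the dominant colors of the current frame before segmentation
rgb = imread('snapshot.png');
[indexed, palette] = rgb2ind(rgb, 16, 'nodither');     % coarse color quantization
counts = histcounts(indexed(:), 0:size(palette, 1));   % pixels per palette color
threshold = 500;                                       % pixels, from the histogram analysis
dominant = find(counts > threshold);                   % palette rows above the threshold
dominantColors = palette(dominant, :);                 % their RGB values (0..1)
dominantLab = rgb2lab(dominantColors);                 % marker set for segmentation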

3.3 Development of the geometric characteristics determination algorithm

3.3.1 Transformation from L*a*b into binary form

After the segmentation procedure an image was obtained in which black corresponds to the background of the image and the other colors to the segmented ones.

IMG_b8c32627-5dc9-463e-b83d-0c7a5adc7fe2

Figure 3.7 - Image after segmentation

It is more convenient to determine the geometric features of objects from a binary image, so the next step was to transform the L*a*b image into a binary one. The following algorithm was used for this transformation.

IMG_281a07ee-6b15-497d-b681-577d66a51b60

Figure 3.8 - Block diagram of the transformation from L*a*b into binary form
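A minimal sketch of this transformation is given below, assuming that the segmentation step produced a per-pixel label matrix in which the background colors (white, grey) have their own labels.

% Sketch: transform the segmented image into binary form
markerLabels = [1 2 3];                             % labels of red, green, yellow (assumed)
binaryImage  = ismember(labelMatrix, markerLabels); % 1 = color marker, 0 = background
binaryImage  = bwareaopen(binaryImage, 50);         % remove noise regions below an
                                                    % assumed threshold of 50 pixels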

3.3.2 Geometric characteristics determination

To improve the accuracy after the color segmentation procedure, a method of geometric feature determination is applied. After color segmentation the image is transformed into a binary image, where 1 corresponds to a color marker (white) and 0 corresponds to the background of the image (black). A noise threshold was introduced in the algorithm in order to filter out the small noise regions that appear in the image. The next step is to calculate the area and roundness of each obtained object.

It is well known that the roundness of a circle equals 1, while the roundness of an ellipse lies between 0 and 1. Taking into account the errors introduced by the transformation from L*a*b into a binary image, the detected circle is not perfect, and its roundness will also lie between 0 and 1.

To filter out objects whose shape is not even close to a circle, a roundness threshold is set: all objects with roundness below the threshold value are filtered out and considered errors of the image segmentation.

IMG_fea9168a-6c8b-4df3-bd4a-097ffe7c9fa6

Figure 3.9 - Block diagram of geometric characteristics determination algorithm

If an object keeps at least one of its parameters during motion, it is considered suitable for use in visual positioning.
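The filtering described in this subsection can be sketched in Matlab as follows; both threshold values are assumptions, not the ones used in the actual program.

% Sketch: keep only sufficiently large, sufficiently round objects
minArea      = 200;                      % noise threshold, pixels (assumed)
minRoundness = 0.7;                      % roundness threshold (assumed)
stats = regionprops(binaryImage, 'Area', 'Perimeter', 'Centroid');
keep  = false(numel(stats), 1);
for k = 1:numel(stats)
    metric  = 4 * pi * stats(k).Area / stats(k).Perimeter^2;   % 1 for an ideal circle
    keep(k) = stats(k).Area >= minArea && metric >= minRoundness;
end
markerStats = stats(keep);               % objects accepted as color markers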

3.4 Localization algorithm

3.4.1 Local coordinate system

Figure 3.10 illustrates the local coordinate system of the camera and its field of view (FOV). It shows the top and side views of a robot (not to scale). Notice that there is a blind area immediately in front of the robot. Also, the top of the camera image in this diagram is above the true horizon (the horizontal line through the centre of the camera). In general, the horizon is not at the centre of the image because the camera is tilted downwards to improve the visibility of the foreground.

IMG_9018eade-7942-413a-8cd0-9acc66d1fb5f

Figure 3.10 - Local coordinate system of camera, and camera FOV

In the FOV diagram [4, 6], α is one-half of the vertical FOV, β is one-half of the horizontal FOV, and γ is the camera tilt angle. (α and β are related through the aspect ratio of the camera, which is usually 4:3 for conventional video cameras.) If the image resolution is m by n pixels, then the values of the image coordinates (u, v) will range from 0 to (m-1) and 0 to (n-1) respectively.

Consider rays from the camera to points on the ground corresponding to successive scanlines in the camera image. Each pixel in this vertical column of the image corresponds to an angle of 2α/(n-1). Similarly, pixels in the horizontal direction correspond to an arc of 2β/(m-1).

With φ denoting the angle between the vertical and the ray to the nearest visible ground point, the following relationships can be easily determined from the diagram:

φ + α + γ = 90°   (3.1)

tan(φ) = b / h   (3.2)

tan(φ + 2α) = (b + d) / h   (3.3)

tan(β) = w / (b + d)   (3.4)

The values of b, d and w can be measured, although not very accurately (to within a few millimeters), by placing a grid on the ground, and h can be measured directly. For an arbitrary vertical image coordinate, v, the distance along the ground (the y axis) can be calculated using equation (3.5). Note that, by convention, the vertical coordinate v in an image counts downwards from the top, which is reflected in the formula.

y = h tan( φ + 2α (n - 1 - v) / (n - 1) )   (3.5)

If α is sufficiently large, then eventually y will progress out to infinity, and then come backwards from minus infinity. (This is a result of the tan function, but in geometrical terms it means that a ray through the camera lens intersects the ground plane behind the camera.) On the other hand, a larger tilt angle, γ, will reduce φ + 2α so that y never reaches infinity, i.e. the ray corresponding to the top of the image hits the ground.

Having calculated y, the x value corresponding to the u coordinate is:

x = y tan( β (2u - m + 1) / (m - 1) )   (3.6)

This is the distance along the x axis, which is measured at right angles to the centre line of the robot and can therefore be positive or negative. Notice that x depends on both u and v because of the perspective transformation.

Given the four parameters (b, d, h and w), these calculations only need to be performed once for a given camera geometry, and a lookup table of corresponding (x,y) values can be constructed for all values of (u,v) in the image. (Note that the y value will approach infinity, so that above a certain v value it becomes irrelevant.) This makes re-mapping of image pixels for the inverse perspective mapping very quick, and the boundary line between the floor and the obstacles can easily be drawn onto a map using Cartesian coordinates with a known scale.
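A sketch of how such a lookup table can be built is given below. The geometric parameters are illustrative values, not calibration results of the actual camera.

% Sketch: (u, v) -> (x, y) lookup table for the inverse perspective mapping
m = 640;  n = 480;                       % image resolution
h = 30;  b = 20;  d = 80;  w = 60;       % assumed camera geometry, cm
phi   = atan(b / h);                     % from eq. (3.2)
alpha = (atan((b + d) / h) - phi) / 2;   % from eq. (3.3)
beta  = atan(w / (b + d));               % from eq. (3.4)
[u, v] = meshgrid(0:m-1, 0:n-1);
y = h * tan(phi + 2 * alpha * (n - 1 - v) / (n - 1));   % eq. (3.5)
x = y .* tan(beta * (2 * u - m + 1) / (m - 1));         % eq. (3.6)
valid = (y > 0) & (y < 500);             % keep points in front of the camera,
                                         % within an assumed 5 m cut-off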

Similarly, a table can be constructed to map pixel locations into polar coordinates, (r, θ). Mapping the obstacle boundary into this polar space produces the Radial Obstacle Profile (ROP). The advantage of the ROP as a representation is that rotations of the camera are equivalent to a linear sliding of the ROP to the left or right, making it easy to predict what the ROP should look like after the robot has rotated on the spot, e.g. as part of performing a 360° sweep.

3.4.2 Positioning in 2D case for 1 landmark

The coordinate system (X, Y) is depicted in the figure below. The local coordinate system of the robot [10] is (Xr, Yr), where (xr, yr) are the coordinates of the robot's center of mass and (xf, yf) are the coordinates of a color marker.

IMG_4a053db9-8379-48c9-b390-ae7a8805dd38

Figure 3.11 - Local and global coordinate systems for positioning in 2D in case for 1 landmark

For the nonlinear observability analysis it is convenient to introduce polar coordinates for the relative state [ IMG_bec609da-d08a-466b-a2fc-b9c94a1f5834 ]:

IMG_06eba1f7-9e0e-4262-94ef-0996679abd68 (3.7)

IMG_2f597490-3591-4609-a5d6-7ba456d2b3e9 (3.8)

where IMG_b10cda49-5b07-45ea-b1bf-4e691bf8eb5c is the distance between the vehicle and a landmark, IMG_c1d2cd4c-3866-43f6-81db-95c4cee7af9e is the bearing of the landmark with respect to the vehicle considering the vehicle orientation, and

IMG_0ae6f121-ed72-464e-9fda-d51081ca2237 is the angle between Xr and X. Figure 3.11 illustrates the vehicle landmark configuration in the global and in the relative frames.
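Equations (3.7) and (3.8) are reproduced above as images; the usual form of this relative polar state, which the following sketch assumes, is the Euclidean distance and the bearing computed from the landmark and vehicle coordinates.

% Sketch (assumed standard form): range and bearing of a landmark (xf, yf) seen
% from the robot at (xr, yr) with heading theta (angle between Xr and X)
range   = sqrt((xf - xr)^2 + (yf - yr)^2);     % distance robot -> landmark
bearing = atan2(yf - yr, xf - xr) - theta;     % bearing in the robot frame
bearing = atan2(sin(bearing), cos(bearing));   % wrap to (-pi, pi]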

3.4.3 Positioning in 2D case for 2 landmarks

The coordinate system (X, Y) is depicted in the figure below [10]. Two markers with coordinates (Xm1, Ym1) and (Xm2, Ym2) respectively, and a robot with coordinates (Xr, Yr), are shown in the figure.

IMG_a5ceb5ef-1f84-4c48-9d11-e406f38f83d7

Figure 3.12 - Local and global coordinate systems for positioning in 2D in case for 2 landmarks

The following system of equations has been obtained:

IMG_1243a131-2361-4398-b481-09b93ccb5fbe (3.9)

If two markers are in the robot's line of sight, the lines of position of the first and second markers intersect, giving two possible robot positions, one of which can be rejected. The distances from the robot to the first and second markers are R1 and R2 respectively. The coordinates of the object are determined in the local coordinate system. The line of position is a circle whose center is at the coordinates of the marker, and the distance R to the detected marker can be determined. The navigation system receives the marker template data from the correlation-extremal navigation system. The robot moves and tracks the environment; when the nearest marker is in its line of sight, the robot detects it and determines which of the color markers stored in its database has just been detected. Knowing the color and the marker, the number of pixels of the color marker is calculated and the distance from the robot to the detected marker is determined.
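The positioning idea described above can be sketched as the intersection of two circles; the function below is an assumed illustration, not the code of the actual program. One of the two returned candidate positions is then rejected using prior knowledge of which side of the marker pair the robot is on.

% Sketch: robot position from two markers and the measured ranges R1, R2
function P = positionFromTwoMarkers(xm1, ym1, R1, xm2, ym2, R2)
    dx = xm2 - xm1;  dy = ym2 - ym1;
    D  = hypot(dx, dy);                      % distance between the markers
    if D > R1 + R2 || D < abs(R1 - R2)
        P = [];                              % the circles do not intersect
        return;
    end
    a  = (R1^2 - R2^2 + D^2) / (2 * D);      % distance from marker 1 to the chord
    hc = sqrt(max(R1^2 - a^2, 0));           % half-length of the chord
    xm = xm1 + a * dx / D;   ym = ym1 + a * dy / D;
    P  = [xm + hc * dy / D,  ym - hc * dx / D;   % first candidate position
          xm - hc * dy / D,  ym + hc * dx / D];  % second candidate position
end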

4. Software development for visual SLAM system

4.1 Input data of program

The program requires the following input data: - real-time video (30 fps, color image, .avi video format, 640×480 resolution);

4.2 Output data of program

The developed program detects distinctive objects of the environment and determines the distance from the robot to the detected color marker.

The distance D is a real value with the following representation format: XX.XXXX.

4.3 Description of software block diagram

The description of the developed algorithm: 1. Video loading.

2. Image segmentation procedure.

3. Determination of the color of the marker.

4. Morphological analysis of the image.

5. Determination of geometric parameters of color marker.

6. Noise filtering.

7. Determination of the distance to the detected color marker.

8. Output of the distance to the detected color marker.
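The whole chain can be summarized by the following loop skeleton. The adaptor name passed to videoinput and the step functions (segmentByColor, labelsToBinary, filterByGeometry, distanceToMarker) are hypothetical placeholders for the steps listed above, not the names used in the actual program.

% Skeleton of the processing loop (step numbers refer to the list above)
cam = videoinput('winvideo', 1, 'RGB24_640x480');   % 1. video loading (assumed adaptor)
while true
    frame   = getsnapshot(cam);
    labels  = segmentByColor(frame);                % 2-3. segmentation, marker color
    bw      = labelsToBinary(labels);               % 4.   morphological analysis
    markers = filterByGeometry(bw);                 % 5-6. geometric parameters, noise filtering
    for k = 1:numel(markers)
        D = distanceToMarker(markers(k));           % 7.   distance to the detected marker
        fprintf('marker %d: D = %.4f cm\n', k, D);  % 8.   output of the distance
    end
end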

IMG_4f45985a-8b22-485a-8bef-87461b38b145

Figure 4.1 - Software block diagram

4.4 User manual

The minimal hardware and software requirements for the given project are:

1. CPU: 1.666 GHz

2. RAM: 128 MB

3. Video card: 64 MB

4. Monitor (17")

5. Web camera (onboard the robot)

6. Operating system: Windows 2000/XP

7. An LPT port must be present and working in the computer.

To run the program, launch the file "Vis_marker_system.exe".

IMG_348b1ac8-920b-4505-ad4b-e1b46eb039c5

Figure 4.2 - User’s manual of the program

The program interface consists of several elements: - Button "Start camera";

- Area "Primary image obtained by web camera";

- Area "Distance to object determination by means of web camera";

- Area "Distance calculations";

- Area "Histogram of color distribution in the current image";

- Buttons corresponding to the colors of the color markers.

The button "Start camera" is used for loading the real time video.

Area "Primary image obtained by web camera" represents the input image and the results of real time video processing and is placed at the left lower corner of the program interface.

Area "Distance to object determination by means of web camera" is used for reflection of results, obtained after image processing.

Area "Distance calculations" shows the distance to the color marker (red, green or yellow) in cm and the robot coordinates.

The buttons corresponding to the marker colors are used to display the color that was distinguished by the system.

4.5 Test example

1. Run the program "Vis_marker_system.exe" from the folder "Prog".

2. Click the "Start camera" button to display the real-time video. The real-time video appears in the lower-left corner and the segmented images at the top right of the program interface. If a color marker is detected, the corresponding processed image appears at the top of the program interface, and the distance to the color marker is displayed in the box located in the area "Distance calculations". The buttons display the color of the marker that was distinguished by the system. A histogram of the color distribution in the current image appears in the lower-right corner.

IMG_b354b663-3d6f-4896-9777-39fb95fb0a04

Figure 4.3 - Test example of the program

Modern methods and means of navigation for the autonomous mobile robot have been analyzed in the given work.

Matching against distinctive objects of the surrounding environment (including visual features) is used in different areas, for example in solving the task of simultaneous localization and mapping (SLAM). The goal of this work was to analyze different methods and means of visual positioning and to create a system for an autonomous mobile robot that is able to separate distinctive objects of the environment, determine the distance to these objects, and position itself relative to them. By distinctive objects we understand objects that have certain color and geometric characteristics.

This task has been realized using color segmentation of the current image based on the L*a*b color model. To analyze the geometric features of the separated objects, binary (black-and-white) image processing methods were applied. The camera was also calibrated in order to determine the distance to the color markers. Sets of experiments were carried out to determine the factors that influence the color index and, consequently, the accuracy of segmentation. The result is the following: the primary step before image segmentation has to be the determination of the colors above the threshold value, i.e. the colors that dominate the current image, and the further segmentation procedure is performed based on this information.

The results of the given task demonstrate the required accuracy and good reliability.

The developed algorithm can be used in different branches of industry. It reduces the time required to perform the work, provides simplicity and increases reliability.
