
One of the main obstacles still hindering the penetration of mobile robots into broad consumer markets is the unavailability of powerful, versatile and cheap sensing. Vision technology is potentially a clear winner as far as the ratio of information provided to cost is concerned: cameras of acceptable accuracy currently sell at prices one to two orders of magnitude lower than those of laser scanners. As a consequence, much attention is being devoted to solving the non-trivial problems involved in using visual information to build maps, localize, and navigate through the environment.

This thesis deals with the problem of using off-the-shelf cameras mounted on inexpensive mobile platforms to enable navigation and control to given goal configurations in space, based on visual maps of the unknown environment that are built contextually in the process. To this purpose, some powerful tools have recently been provided in the literature on localization and mapping for autonomous vehicles, mainly by the computer science community, and separately by research on visual servoing of robots, coming mostly from an automatic control background. Our effort focuses mainly on the integration between advanced techniques for sensing and understanding the environment (perception) and the need to make and implement decisions (action) based on real-time sensory feedback from the environment.

To achieve autonomous navigation in a complex environment using primarily vision in conjunction with noisy odometry, the capability of servoing to a given reference image is clearly not enough. The system should also build and update maps of the environment in representations that are informative enough for control tasks (allowing the vehicles to accurately reach arbitrary desired positions in the environment), aware of resource limitations (such as memory space, or communication bandwidth in multiple-robot systems), and useful for human navigation as well.

Although Simultaneous Localization And Mapping (SLAM) is a well-posed problem and a solution exists in the general case, very little has been done on the interactions between the exploring vehicle and the mapped environment, which is, on the contrary, the main topic of this thesis. Hence, the key point of this work is the attempt to unify the map construction and robot localization estimation problems with the feedback design. In system-theoretic terms, this problem seems to call for a generalization of the separation principle of linear stochastic control, which unfortunately remains a mere leap of faith in the context of robotic systems with highly nonlinear dynamics. Nevertheless, it has been well known since the first steps towards a solution of the SLAM problem that even navigation in a perfectly mapped environment increases the inaccuracy of the mobile robot's localization.

The system we are developing to address the problem of Visual-Based Simultaneous Localization And Mapping for Servoing (V-SLAMS) comprises several interconnected components. The final goal of our project is to have multiple mobile agents, equipped with cameras and possibly basic odometry, cooperatively build a visual map of the environment. The map allows any single vehicle to localize itself and navigate through the mapped environment to reach an arbitrary position, which may not have been visited in advance. The mapping and servoing phases should not necessarily be thought of as consecutive in time.

In our proposed architecture, sensory data organization starts with stochastic estimation of the three-dimensional coordinates of image features, and possibly of uncertain parameters in the camera and environment models, via Extended Kalman Filter techniques. Estimated values are merged into a general 3D feature map, useful for robot navigation and localization. The flexibility of the feature map allows a rather simple improvement of the extracted information using standard computer vision algorithms, such as texture extraction and homography-based plane detection. A topological image-based map is also maintained in parallel to effectively connect the feature-based map with the topology of the surrounding environment. In what follows, the merging of topological and metric maps is referred to as hybrid mapping. Once vehicles are localized with respect to any point of the map, cooperation is enabled by sharing the global feature-based map and the image map, allowing robots to regroup at any position with respect to the environment or to the other vehicles. In implementing the V-SLAMS architecture, particular attention has been devoted to producing maps whose requirements in terms of memory allocation and communication bandwidth remain limited enough for sharing in a multi-robot architecture.

Practical applications of mobile robot platforms involve many different problems, mainly due to uncertainties in the kinematic model of the system under control, to unmodelled dynamics, or to nonlinearities of the available actuators (e.g., saturations or dead zones). An approach to the control of nonholonomic mobile robots using nonlinear adaptive control in the presence of uncertain actuator or dynamic parameters, while coping with actuator saturation limits, is presented. The underlying idea of the proposed approach is that each component of the final control law can be composed modularly using Lyapunov functions, starting from a stabilizing controller designed for the perfectly known kinematic model.
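As an example of the kind of kinematic stabilizing controller such a modular Lyapunov design can start from, the following sketch implements the classical polar-coordinate posture stabilizer for the unicycle. This is a standard textbook law, not the specific controller of the thesis, and the gains are illustrative.

```python
import math

# Standard stabilizing gains for the polar-coordinate unicycle law
# (k_rho > 0, k_beta < 0, k_alpha > k_rho); values are illustrative.
K_RHO, K_ALPHA, K_BETA = 3.0, 8.0, -1.5

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def control(x, y, theta):
    """Velocity commands driving the pose (x, y, theta) to the origin
    with zero final heading."""
    rho = math.hypot(x, y)                     # distance to the goal
    alpha = wrap(math.atan2(-y, -x) - theta)   # heading error to goal
    beta = wrap(-theta - alpha)                # final-orientation error
    v = K_RHO * rho
    omega = K_ALPHA * alpha + K_BETA * beta
    return v, omega

# Closed-loop simulation with Euler integration.
x, y, theta = -2.0, -1.0, 0.0
dt = 0.005
for _ in range(8000):
    if math.hypot(x, y) < 1e-3:    # close enough: stop commanding
        break
    v, omega = control(x, y, theta)
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
```

Such a kinematic controller is the natural starting block: adaptive and saturation-aware terms can then be stacked on top of it through additional Lyapunov arguments.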

The study of maps carried out in this thesis aims to identify the characteristics that are crucial for building a reliable map of the surroundings in real time. Memory-safe representations allow map sharing among a team of exploring robots while respecting the obvious limitations on communication bandwidth. In general, a team of visually servoed robots has maneuvering abilities that depend on the vehicles' dynamic models. To enlarge the set of mobile vehicles that can take part in the exploration, a stabilizing point-to-point motion controller is proposed for the bicycle-like vehicle. The feedback control law has to take the limited field-of-view constraint into account during parking maneuvers; hence, the proposed controller builds on the visual servoing control developed for the unicycle vehicle, in order to inherit its convergence characteristics. To this end, a backstepping framework has been adopted.
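A minimal sketch of the backstepping idea for the bicycle-like vehicle: an outer unicycle-level command (v, omega) is realized by an inner loop that drives the steering angle toward the value producing the desired yaw rate. The wheelbase, gain, and simple proportional inner loop are illustrative assumptions, and the field-of-view constraint handled in the thesis is ignored here.

```python
import math

L = 0.5       # assumed wheelbase
K_PHI = 5.0   # assumed inner steering-loop gain

def bicycle_step(state, v, omega_des, dt):
    """One Euler step of the kinematic bicycle. The steering-rate input u
    drives phi toward the steering angle that realizes the yaw rate
    omega_des commanded by the outer (unicycle-level) controller."""
    x, y, theta, phi = state
    phi_des = math.atan(L * omega_des / v)   # angle giving yaw rate omega_des
    u = -K_PHI * (phi - phi_des)             # inner backstepping-style loop
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += v * math.tan(phi) / L * dt      # bicycle yaw kinematics
    phi += u * dt
    return (x, y, theta, phi)

# Constant outer command: the vehicle should settle onto a circle whose
# curvature corresponds to omega_des.
state = (0.0, 0.0, 0.0, 0.0)
v, omega_des, dt = 1.0, 0.5, 0.01
for _ in range(1000):
    state = bicycle_step(state, v, omega_des, dt)
```

In the thesis the outer command comes from the unicycle visual servoing law, so the bicycle inherits its convergence properties; here a constant command is used only to show the inner loop settling.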