Name: Indoor Three-Layer Mapping and Navigation System Based on Visual-Voice Interaction
Author:
Tutor: Pan Wei
School: Xiamen University
Course: Computer technology
Keywords: Skeleton Tracking, Voice Marking, Layer Mapping and Navigation, ROS
CLC: TP242
Type: Master's thesis
Year: 2014

Abstract:
In this thesis, using a Pioneer-3AT equipped with a Kinect and a SICK LMS100 laser scanner as hardware and ROS as software, we build a grid-topological-semantic three-layer mapping and navigation system based on skeleton-tracking and voice-marking interaction. The system contains four modules: interaction, control, mapping, and navigation. In the interaction module, the system uses Kinect-based skeleton tracking as the visual interaction channel and voice marking based on the iFLYTEK cloud interface as the voice interaction channel: the robot obtains the user's position through skeleton tracking and records the names of places that carry special meaning for humans through an iFLYTEK-based phone app. In the control module, the robot converts the user's position into linear and angular velocity commands according to a functional relation, which lets the robot follow the user steadily. In the mapping module, the system builds a grid-topological-semantic three-layer map. While following, the robot builds a real-time grid map from laser and odometry data using the GMapping algorithm. A topological node is defined by associating a semantic name with the corresponding coordinate, and the robot builds the topological map through an augmenting algorithm; finally, it builds the semantic map by extracting the semantic names from the topological nodes. The robot understands the environment through the bottom grid map, while the user understands it through the top semantic map; the topological map in the middle, which associates semantic names with coordinates, bridges the top and bottom layers. In the navigation module, the user first utters the name of the destination node through the phone app; after finding this name in the semantic layer, the robot locates the corresponding topological node and extracts its coordinate in the topological layer.
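The thesis does not state the control module's functional relation explicitly. The sketch below assumes a simple proportional law: the range error to a desired follow distance drives the linear velocity, and the bearing to the user drives the angular velocity. The gains, follow distance, and speed caps are illustrative assumptions, not values from the thesis.

```python
import math

# Hypothetical gains and limits; the thesis only says the user's position
# is mapped to (linear, angular) velocity by "a certain functional relation".
K_LIN = 0.5          # linear gain (1/s), assumed
K_ANG = 1.0          # angular gain (1/s), assumed
FOLLOW_DIST = 1.2    # desired distance to the user (m), assumed
MAX_LIN = 0.7        # linear speed cap (m/s), assumed
MAX_ANG = 1.0        # angular speed cap (rad/s), assumed

def follow_cmd(x, y):
    """Map the user's position (x forward, y left, robot frame, metres)
    to a (linear, angular) velocity command for the follower."""
    dist = math.hypot(x, y)                 # range to the user
    bearing = math.atan2(y, x)              # angle to the user
    v = K_LIN * (dist - FOLLOW_DIST)        # close the range error
    w = K_ANG * bearing                     # turn toward the user
    v = max(-MAX_LIN, min(MAX_LIN, v))      # clamp to the robot's limits
    w = max(-MAX_ANG, min(MAX_ANG, w))
    return v, w
```

For a user 2 m straight ahead, `follow_cmd(2.0, 0.0)` commands a forward velocity and no rotation; a user off to the left additionally produces a positive (counter-clockwise) angular velocity, so the robot keeps facing the user while closing to the follow distance.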
Finally, the robot sends the coordinate to the grid-map layer, transforms the grid map into a costmap, and plans a path with the A* algorithm. Because human intelligence is brought into the loop, interactive mapping not only increases mapping efficiency but also adds semantic information to the map. The grid-topological-semantic map efficiently narrows the cognitive gap between human and robot by bridging the human's way of understanding the environment (semantic concepts) and the robot's (sensor data). In addition, because the system is built on the ROS framework with modular programming, it can be easily ported and extended. We also performed extensive experiments and tests in different environments. The results show that the interactive grid-topological-semantic map can efficiently combine semantic and environmental information to complete mapping and navigation tasks. We hope the system can be applied to home service robots, museum guide robots, and intelligent wheelchairs for the disabled to improve the quality of daily life.
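A minimal sketch of the navigation pipeline described above: a semantic name resolves to a topological node, whose coordinate becomes the goal for A* on the grid layer. The place names, grid layout, 4-connectivity, and unit step costs are illustrative assumptions; the thesis uses ROS costmaps rather than this toy grid.

```python
import heapq

# Toy topological layer: semantic name -> coordinate (assumed example data).
topological_map = {"kitchen": (4, 4), "door": (0, 0)}

def astar(grid, start, goal):
    """4-connected A* on an occupancy grid (0 = free, 1 = occupied).
    Returns the path as a list of (row, col) cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan distance: admissible on a 4-connected grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start, None)]   # (f, g, cell, parent)
    came_from = {}                            # closed set with back-pointers
    g_cost = {start: 0}
    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:                 # already expanded
            continue
        came_from[cell] = parent
        if cell == goal:                      # reconstruct the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), cell))
    return None

def navigate(name, grid, start):
    """Semantic layer -> topological layer -> grid-layer path planning."""
    goal = topological_map[name]
    return astar(grid, start, goal)
```

With an admissible heuristic and unit step costs, the first time the goal is expanded the path is shortest; this mirrors, in miniature, how the spoken destination name is grounded to a coordinate before grid-level planning.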