Welcome to Our Robotic Pet Project!
The Robotic Pet is an automated robot that can mimic common pet behaviours, such as playing with a ball autonomously and interacting freely with human gestures. As the final project of the EE 125 Introduction to Robotics class, the robotic pet is built on the TETRIX robot platform used at UC Berkeley.
Introduction
Our robotic pet project aims to bring happiness and potential convenience to people's everyday lives. Because robots have no minds or ideas of their own, making them "live" like real animals is an interesting problem for both of us.
An ideal robotic pet can follow its owner and interact with them in real time, while still resembling a real pet and bringing enjoyment. In our project, we specifically investigated two major functionalities: self-entertaining with a ball and comprehension of the owner's hand gestures. In the future, such a robot could also serve as an extra carrier for lifting heavy packages, which would be especially beneficial to elderly people.
Design
Functionality
- Self-entertaining with a ball
- Automatic following of the pet's owner
- Owner's hand gesture comprehension
Design Criteria
- 1. Stable self-entertaining mode to mimic a pet
- 2. Effective hand gesture comprehension and interaction
- 3. Interaction with the pet's owner when possible
- 4. Low communication lag and short response time
- 5. Durable, robust and safe for users
Computer Vision
- a) Data streaming: Because the board on the robot does not have enough computing power, we decided to offload all heavy computation to our laptop. The first task is therefore to stream the camera data from the robot to the laptop. The "raspicam_node" package helped us adjust the frame rate and image quality so that the laptop receives the stream in real time with little lag (see the first sketch after this list).
- b) Camera calibration: Our second task is camera calibration. By computing the homography matrix H, we map a 2-dimensional pixel position from the camera into the corresponding floor coordinate (x, y), so the robot can process object positions in the ground frame (see the second sketch after this list).
- c) Object detection: Now we can detect the ball and the input gesture. This is done by filtering out environment pixels and using the Python OpenCV functions to extract the target contours and locate the target's centre. When other environmental noise is present, we take the bounding box of the largest contour to indicate the location; if even the largest contour is small enough, we conclude that the target is not in the image. Lab 6, "Object Tracking", in the course gave us the initial idea. Building on it, we designed separate modes for the hand gesture and the ball, implemented the bounding-box area criterion for checking the object's presence, and tuned the Kalman filter parameters for linear motion prediction. A ROS publisher then publishes both the predicted and the measured location of the target for further processing (see the third sketch after this list).
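Below is a minimal sketch of the laptop-side listener for step (a). The topic name /raspicam/image/compressed and the node name pet_image_listener are assumptions (they depend on how raspicam_node is launched); the callback only decodes each compressed frame for the downstream vision code.

```python
#!/usr/bin/env python
# Laptop-side listener sketch: the topic name below is an assumption and
# depends on how raspicam_node is launched on the robot.
import rospy
import numpy as np
import cv2
from sensor_msgs.msg import CompressedImage

def image_callback(msg):
    # Decode the JPEG-compressed frame into an OpenCV BGR image.
    frame = cv2.imdecode(np.frombuffer(msg.data, dtype=np.uint8),
                         cv2.IMREAD_COLOR)
    # ... hand the frame to the detection / tracking pipeline ...

if __name__ == '__main__':
    rospy.init_node('pet_image_listener')
    # queue_size=1 drops stale frames so we always process the newest image.
    rospy.Subscriber('/raspicam/image/compressed', CompressedImage,
                     image_callback, queue_size=1)
    rospy.spin()
```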
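For step (b), the pixel-to-floor mapping can be sketched as follows; the four pixel/floor correspondences are placeholder numbers rather than our actual calibration measurements.

```python
import numpy as np
import cv2

# Pixel coordinates (u, v) of four floor markers seen by the camera
# (placeholder values) ...
pixel_pts = np.array([[120, 400], [520, 400], [80, 220], [560, 220]],
                     dtype=np.float32)
# ... and their measured floor coordinates (x, y) in metres (placeholders).
floor_pts = np.array([[-0.3, 0.5], [0.3, 0.5], [-0.6, 1.5], [0.6, 1.5]],
                     dtype=np.float32)

# The homography H maps homogeneous pixel coordinates to the ground plane.
H, _ = cv2.findHomography(pixel_pts, floor_pts)

def pixel_to_floor(u, v):
    """Map a pixel (u, v) to ground-plane coordinates (x, y)."""
    x, y, w = H.dot(np.array([u, v, 1.0]))
    return x / w, y / w

print(pixel_to_floor(320, 300))
```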
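And for step (c), a minimal sketch of the detector. The HSV colour bounds and the minimum bounding-box area are illustrative values, not the ones we tuned, and the Kalman-filter prediction and ROS publishing steps are omitted for brevity.

```python
import cv2
import numpy as np

LOWER_HSV = np.array([29, 86, 60])    # assumed colour range for the ball
UPPER_HSV = np.array([64, 255, 255])
MIN_BOX_AREA = 500                    # below this, the target is "not present"

def locate_target(frame_bgr):
    """Return the (u, v) centre of the largest matching blob, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)   # filter environment pixels
    mask = cv2.erode(mask, None, iterations=2)      # clean up speckle noise
    mask = cv2.dilate(mask, None, iterations=2)
    # [-2] keeps this working across OpenCV 2.x and 3.x/4.x return signatures.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    if w * h < MIN_BOX_AREA:
        return None                                 # bounding box too small
    return (x + w // 2, y + h // 2)
```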
Control Signal Generation
- a) For the ball: Our goal is to let the robotic pet entertain itself with the ball. When the image-processing publisher reports that no object is present, the pet rotates to search until the ball comes into view again. Using feedback control, it then adjusts its facing angle until it is aligned with the ball, refining the alignment gradually with several short forward movements. Finally, it accelerates at full speed to hit the ball. The ball is kicked away, and the robot searches for it again, repeating the process. Our controller code performs this job and publishes velocity commands remotely to the robot (see the first sketch after this list).
- b) For hand gestures: The pet should obtain the hand position and keep a certain distance from the hand. When the hand is far away, it should catch up with it; when the hand is waved left or right, the pet should rotate in the corresponding direction (see the second sketch after this list).
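A minimal sketch of the search / align / charge logic for the ball is shown below. The gains, tolerance and speeds are placeholders rather than our tuned values, and the helper ball_command and the /pet/cmd_vel topic name are hypothetical.

```python
from geometry_msgs.msg import Twist

SEARCH_SPIN = 0.6    # rad/s while rotating to look for the ball (placeholder)
ALIGN_GAIN = 1.5     # proportional gain on the bearing error (placeholder)
ALIGN_TOL = 0.05     # rad: considered "aligned" below this error (placeholder)
CHARGE_SPEED = 0.8   # m/s full-speed run at the ball (placeholder)

def ball_command(bearing):
    """bearing: angle to the ball in radians, or None if the ball is not seen."""
    cmd = Twist()
    if bearing is None:                  # ball lost: rotate to search for it
        cmd.angular.z = SEARCH_SPIN
    elif abs(bearing) > ALIGN_TOL:       # feedback-align the facing angle
        cmd.angular.z = ALIGN_GAIN * bearing
        cmd.linear.x = 0.1               # short forward nudge while aligning
    else:                                # aligned: charge to kick the ball
        cmd.linear.x = CHARGE_SPEED
    return cmd

# The command is then published remotely to the robot, e.g.:
#   pub = rospy.Publisher('/pet/cmd_vel', Twist, queue_size=1)
#   pub.publish(ball_command(latest_bearing))
```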
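And a similar sketch of the hand-following behaviour: a simple proportional controller on the hand's distance and lateral offset in the floor frame. The target distance and gains are again illustrative placeholders.

```python
from geometry_msgs.msg import Twist

TARGET_DIST = 0.6   # metres to keep between the pet and the hand (placeholder)
K_DIST = 0.8        # forward/backward proportional gain (placeholder)
K_TURN = 1.2        # rotation gain on the lateral offset (placeholder)

def hand_command(x, y):
    """(x, y): hand position in the floor frame, ahead of and to the left of the pet."""
    cmd = Twist()
    cmd.linear.x = K_DIST * (x - TARGET_DIST)   # close or open the gap to the hand
    cmd.angular.z = K_TURN * y                  # rotate toward a left/right wave
    return cmd
```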
Materializing Velocity Signal
Using the I2C interface, we feed the speed information from the Raspberry Pi board to the DC motors. On the Raspberry Pi, we subscribe to the velocity command from the ROS master node and send the corresponding binary signals to the predetermined pins on the board. The electrical signals are then received by the left and right motors, which move as desired (sketched below).
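A sketch of the on-board node is given below. The I2C bus number, device address, register layout and power encoding are placeholders that depend on the actual motor-controller wiring; only the subscribe-and-forward structure reflects our setup.

```python
#!/usr/bin/env python
# On-board node sketch: the I2C bus number, device address, register layout
# and power encoding are all placeholders for the real motor-controller wiring.
import rospy
import smbus
from geometry_msgs.msg import Twist

BUS = smbus.SMBus(1)                 # I2C bus 1 on the Raspberry Pi (assumed)
MOTOR_ADDR = 0x10                    # assumed controller address
LEFT_REG, RIGHT_REG = 0x01, 0x02     # assumed power registers

def to_power(v):
    """Clamp a signed velocity into the single byte the controller expects (assumed)."""
    return max(0, min(255, int(128 + 127 * v)))

def cmd_callback(msg):
    # Simple differential-drive mix of the linear and angular velocity command.
    left = msg.linear.x - msg.angular.z
    right = msg.linear.x + msg.angular.z
    BUS.write_byte_data(MOTOR_ADDR, LEFT_REG, to_power(left))
    BUS.write_byte_data(MOTOR_ADDR, RIGHT_REG, to_power(right))

if __name__ == '__main__':
    rospy.init_node('pet_motor_driver')
    rospy.Subscriber('/pet/cmd_vel', Twist, cmd_callback, queue_size=1)
    rospy.spin()
```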
Implementation
Hardware
- Raspberry Pi with camera
- Raspberry Pi extension board
- Tetrix Robot frames and wheels
- Tetrix DC motor and 12V Battery
Software
- ROS Hydro & Groovy
- Debian Wheezy
Flow Chart
For Self-entertaining with the ball:
For owner hand gesture comprehension:
Results
We achieved success in the robotic pet project after several stages of progress. Although we were unable to add further human gesture comprehension using more trajectory points and path planning, the basic functionality of our robotic pet worked well as designed.
Conclusion
- Our project was quite successful and consistent with our project goals and designs. However, we did encounter several difficulties and challenges throughout the project's development.
- First of all, as described in our project proposal, we planned to use a Kinect to realize gesture recognition. Yet after installing various libraries and dependencies on the Raspberry Pi board (OpenNI, etc.), it turned out that this path was blocked by the incompatibility between the Kinect and Debian Wheezy + ROS Groovy. Moreover, leaving heavy computation on a single Raspberry Pi board with very limited computing resources is unwise. After consulting the course GSIs, we changed our implementation to a distributed system with real-time communication, using the on-board Raspberry Pi camera, in order to overcome these difficulties. In detail, we defined one node as the ROS master (our laptop) and the other node as the TETRIX robot. With the help of a wireless network router, the TETRIX robot node is only responsible for image streaming and for listening to and executing motor commands, while the master node is in charge of all other computation and action planning.
- This turned out to be very effective; however, a second difficulty arose immediately: the large lag in back-and-forth communication between the two nodes. This made it nearly impossible for the TETRIX robot to track the target and respond promptly to fast-changing situations. To solve this issue, we explored various resources online and discovered a very useful Raspberry Pi package named raspicam_node. With it, we could experiment with and adjust the image-streaming frame rate and per-image quality, making efficient use of the limited bandwidth and solving the lag issue nicely. At the same time, to optimize the performance of the publishers and subscribers together with the downstream calculations and action planning, we set queue_size = 1 for each individual publisher and subscriber. As a result, there is no longer a large backlog of messages and commands, caused by limited bandwidth and long execution times, sitting in the system to overwhelm and confuse it.
- Future work includes adding more features to the robotic pet, refining the human hand gesture comprehension, optimizing the TETRIX motor control, and smoothing the robot's movements. These will surely be interesting topics to keep working on.
Team
I am a third-year Electrical Engineering & Computer Science student at UC Berkeley. I like playing the piano and video games. This was a very enjoyable class on robotics, and it is exciting to see the robot work after days and nights of building and debugging. I have learned a lot here about both the theory and the applications. You can contact me by shooting me an email.
I am a graduate student in the Department of Electrical Engineering & Computer Science at UC Berkeley. My interests include robotics, embedded software and machine learning. I did my undergraduate study in Computer Engineering, with some research on FPGAs and embedded systems. This was a very fun class for me to take, and the class project will surely be a valuable experience for my future engineering career. You are welcome to check out my LinkedIn page or contact me through email!
Additional Materials
All the project source code is available on GitHub, with download links at the top of the page. Please feel free to check it out!
References
• Aaron Bestick and Austin Buchan. EE C125: Introduction to Robotics Lab Menu. Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, 2014.