End of Semester Autonomous Controls update

Our autonomous controls and electrical subteam has been hard at work over the past month on all stages of our autonomous system “brain”. The “brain” is made up of different stages that take in sensor input, compare it to previous observations, and then tell the car how to behave. Our team has also been active in supporting our combustion and electric teams as they gear up for their respective competitions. See below for pictures, videos, and a summary of what we’ve been up to!

While we are not competing until Summer 2020 and so don’t have a full vehicle to show off yet, on April 27th we attended the unveiling for our combustion and electric teams to support them in displaying their vehicles to sponsors, parents, university faculty, and others interested in our Wisconsin Racing program. It was fantastic to see the cars put together after working side-by-side with the other teams throughout the year.

 

Unveiling was also a great opportunity for us to demonstrate the technology we’ve been working with and explain it to event attendees. We had our LiDAR on display and were able to show how some of our object recognition and path planning systems work. Shown here is our MRS-6124 LiDAR unit sponsored by SICK and our YOLOv3 image classifier model being applied to a video made public by a previous competitor.

We feel it is very important for others to understand how autonomous vehicles work so people can get a better idea of what to expect from future autonomous cars and make more educated decisions about using them in their own lives. For example, we explained how our car will be level 4 autonomy because it is fully self-driving but operates within an enclosed environment, and then elaborated on some of the differences between the levels of autonomy on the road today and where we see the industry heading.

 

Object Recognition Algorithm Improvement

We began this year using a Haar Cascade Classifier with OpenCV because it was easy to implement, but we ran into limitations with its classification speed and accuracy. While it was a good starting point, we needed something faster and more accurate, so we investigated the performance of existing methods.
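For reference, here is roughly what that initial approach looked like. This is a minimal sketch assuming a cascade file trained on cone images; the file names and parameters are placeholders, not our exact configuration.

import cv2

# Load a Haar cascade trained on cone images (hypothetical file name).
cascade = cv2.CascadeClassifier("cone_cascade.xml")

frame = cv2.imread("track_frame.jpg")               # sample input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# detectMultiScale slides the cascade over the image at several scales;
# scaleFactor and minNeighbors trade off detection speed against accuracy.
detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in detections:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)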

We compared single-stage detectors (YOLO, SSD, etc.) and two-stage detectors (Faster R-CNN, R-FCN, etc.), among other existing techniques. In computer vision there is no universally “best” algorithm, as weighing detection speed against accuracy is highly application dependent. Because our application is an autonomous race car, we wanted an algorithm that was very fast. Additionally, since we know our course will only have blue and yellow cones on it, we are willing to sacrifice some accuracy; we believe even a lower-accuracy classifier will find it easy to tell the two apart.

 

This is a generalization of our findings on the performance trade-offs between different image classifiers. The latest YOLO version (v3) offers a significant accuracy improvement over previous YOLO versions, largely due to the switch from the DarkNet-19 backbone to DarkNet-53 (19 to 53 convolutional layers), while remaining fast enough for real-time use. So, after researching multiple existing computer vision methods and factoring in our application, we decided to pursue YOLOv3.
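As a rough illustration (not necessarily our final pipeline), a trained YOLOv3 model exported in Darknet format can be run through OpenCV’s DNN module. The config/weights file names below are placeholders for a cone-trained model.

import cv2
import numpy as np

# Load a Darknet-format YOLOv3 network (placeholder file names).
net = cv2.dnn.readNetFromDarknet("yolov3-cones.cfg", "yolov3-cones.weights")

img = cv2.imread("track_frame.jpg")
# YOLOv3 expects a square, normalized input blob (416x416 is a common size).
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)

# Forward pass through the detection output layers.
outputs = net.forward(net.getUnconnectedOutLayersNames())
for out in outputs:
    for det in out:
        scores = det[5:]                  # per-class confidences
        class_id = int(np.argmax(scores))
        if scores[class_id] > 0.5:        # keep confident detections only
            print("class", class_id, "conf", float(scores[class_id]))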

Training Data

In order to start training our image classifier, we needed lots of pictures of cones. We started taking our own and reached out to FSOCO (Formula Student Objects in Context), an inter-team database of pre-labeled cone images that asks each team to contribute 600 pictures to the data set. This saves time for all teams involved by making it easier and quicker to get reliable training data.

 

To label the cones we use labelImg, a tool for labeling objects in training images. After loading an image, the user simply selects “Create RectBox” and drags a rectangle around the entire object to be labeled (even if part of the object is blocked by another object in front of it). This box is then used by the YOLOv3 algorithm for training and, ultimately, detecting an object. In the image above we have already labeled the front blue and yellow cones with classes of one and zero, respectively, and are in the process of labeling the middle yellow cone.
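For context, labelImg can export annotations in the plain-text YOLO format, where each line stores a class ID and a bounding box normalized by the image size. The helper below is only an illustration of that format; the numbers and coordinates are made up.

# YOLO label format: "class x_center y_center width height", all normalized to [0, 1].
def yolo_label_line(class_id, box, img_w, img_h):
    x_min, y_min, x_max, y_max = box
    x_c = (x_min + x_max) / 2.0 / img_w
    y_c = (y_min + y_max) / 2.0 / img_h
    w = (x_max - x_min) / float(img_w)
    h = (y_max - y_min) / float(img_h)
    return "%d %.6f %.6f %.6f %.6f" % (class_id, x_c, y_c, w, h)

# e.g. a blue cone (class 1) boxed at (620, 410)-(665, 480) in a 1280x720 image:
print(yolo_label_line(1, (620, 410, 665, 480), 1280, 720))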

MRS-6124 LiDAR (sponsored by SICK) Testing

We’ve been working to characterize the performance of our MRS-6124 LiDAR unit, sponsored for our team by SICK. The unit has 24 scan planes with a 120° horizontal and 15° vertical field of view. To test its accuracy and assist with object detection, we set up a cone in front of a wall at a known distance from the LiDAR. We then visualized the output in ROS and measured the distances to where we knew the true wall and cone locations were. After measuring the spacing between scan planes to be ~0.06 m, we measured noise of roughly ±0.12 m in either direction (there seemed to be about two scan planes in front of and behind the true locations). This lines up with the ±0.125 m error published in the product’s user manual. A rough sketch of how we compare measured returns against the known distance appears after the images below.

Testing set up.

ROS output, red star and line mark where the wall is truly measured to be.

ROS output, red star marks where the cone is truly measured to be.
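Here is a sketch of the kind of comparison described above, assuming the point cloud has already been converted to an N x 3 numpy array of (x, y, z) returns and that the target sits straight ahead of the sensor at a tape-measured distance. Names and thresholds are illustrative, not our exact analysis code.

import numpy as np

# points: N x 3 array of (x, y, z) LiDAR returns in the sensor frame.
# Assume the target is straight ahead (along +x) at a known distance.
def range_errors_straight_ahead(points, true_distance, half_width=0.1):
    mask = np.abs(points[:, 1]) < half_width        # narrow corridor around boresight
    ranges = np.linalg.norm(points[mask, :2], axis=1)
    return ranges - true_distance                   # per-return error vs. ground truth

# e.g. errors = range_errors_straight_ahead(cloud_xyz, true_distance=3.00)
#      print(errors.mean(), errors.std())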

We will be continuing to test the performance of the unit for cones at different distances and angles, under different environmental conditions, and with different forms of motion applied to the unit itself (e.g. on a moving platform). We are also in the process of analyzing the data to detect cones within the overall point cloud, and we will likely implement ground-plane removal techniques as well to assist with detection.
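As a sketch of what ground-plane removal can look like, here is a bare-bones RANSAC plane fit that finds the dominant plane in a cloud and drops its points. This is an illustration of the technique under simple assumptions (a single flat ground plane), not code from our pipeline.

import numpy as np

def remove_ground_plane(points, n_iters=100, threshold=0.05):
    """Fit a plane to the cloud with RANSAC and drop its inlier points."""
    rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # Pick 3 random points and form the plane through them.
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                     # degenerate (collinear) sample
        normal /= norm
        d = -normal.dot(sample[0])
        dist = np.abs(points.dot(normal) + d)   # point-to-plane distances
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[~best_inliers]         # keep everything not on the dominant plane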

LD-MRS LiDAR (sponsored by SICK) Testing

SICK sponsored us with a second LiDAR unit, the LD-MRS. Compared to the MRS-6124, this unit has a smaller field of view (110° horizontal, 6.5° vertical) but faster processing and a longer range (300 m vs. 200 m). Shown is sample output visualized through ROS (via rviz).

 

The LD-MRS also comes with built-in object recognition, as the video below shows. This feature puts a bounding box and motion vector on regions the unit believes are connected. We are now testing how close objects (specifically cones) can be to each other before they are lumped together. The results of that testing will help us decide which LiDAR unit we will ultimately use on our car.

State Estimation/SLAM

Knowing where the cones are relative to the car, and where the car is as it moves through space, is complex, yet it can be framed as a SLAM problem.

SLAM attempts to estimate the true state of a system using measurement data and known equations that model the system. Measurements include distances to cones as well as vehicle motion, such as from the Ellipse-2D GPS/IMU that was sponsored for us by SBG Systems. Combining this data lets us describe both the system itself and the things around it, hence Simultaneous Localization and Mapping (SLAM). Throughout the year we’ve been researching existing techniques such as variations of the Kalman Filter (standard, EKF, UKF), FastSLAM (a Rao-Blackwellized particle filter approach), and GraphSLAM (a least-squares optimization approach), among others.

To summarize our findings: Kalman filters are relatively well known compared to other methods and are among the least complex algorithms as far as SLAM is concerned. The Kalman Filter on its own struggles with non-linear systems, so extensions such as the Extended Kalman Filter (EKF) and Unscented Kalman Filter (UKF) were developed to handle those cases; the EKF came first, and the UKF was developed later as a more accurate, faster alternative. FastSLAM looks like a better long-term option due to its increased accuracy and speed, a result of its lower complexity compared to Kalman Filters (O(MN) or O(M log N) depending on implementation, where M is the number of particles and N the number of landmarks, vs. O(N^2) for the Kalman Filter). However, FastSLAM runs an EKF/UKF inside each particle, so we would need a working EKF/UKF first. GraphSLAM seems best suited to larger maps, but since it is relatively new and our environment is small (by SLAM standards), we think it would be more work to implement than it is worth for the time being.

After weighing the pros and cons of each method, along with the level of existing documentation and the feasibility of implementation, we’ve decided to start with a UKF. It seems to be the easiest to implement while still handling the nonlinear systems we have (such as position estimated from acceleration through kinematic constraints), and it does so more accurately than an EKF.

The main difference between the EKF and UKF is in how they propagate a distribution through a nonlinear transformation. The EKF linearizes the function around the current mean (a first-order Taylor approximation): the mean is passed through the function itself, while the covariance is propagated through the linearization, to arrive at an estimate. The UKF instead selects a small set of points (sigma points), passes them directly through the non-linear transformation, and then re-computes the mean and variance from the transformed points. The images below help to illustrate the resulting differences.

Resulting mean estimate from the EKF. While a better estimate of the nonlinear transform compared to the standalone Kalman Filter, the predicted mean (red triangle) is still far from the true mean (blue star).

Resulting mean estimate from UKF. The red dots show the sigma points chosen to represent the distribution through the nonlinear transformation, and the improved estimation accuracy compared to the EKF is clear.

We found these images in a GitHub repository that does an excellent job of explaining how the UKF works, and how Kalman Filters work more generally.

From our research, the UKF is a very accurate yet not overly difficult solution to the SLAM problem. It appears to be a great starting point that can be scaled up to FastSLAM eventually if needed. Because there are so many tutorials and papers written on Kalman Filters, we feel we understand them best and expect to be able to build an implementation tailored to our needs.
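To make the sigma-point idea concrete, here is a small sketch of the unscented transform using the common 2n+1 sigma-point construction. The scaling parameter and the example transform are illustrative, not our filter’s actual models.

import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    """Propagate (mean, cov) through a nonlinear function f using sigma points."""
    n = len(mean)
    # Sigma points: the mean plus/minus the columns of the matrix square root
    # of the scaled covariance.
    S = np.linalg.cholesky((n + kappa) * cov)
    sigma = np.vstack([mean, mean + S.T, mean - S.T])   # (2n+1, n) points
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))     # sigma-point weights
    w[0] = kappa / (n + kappa)

    Y = np.array([f(p) for p in sigma])                 # pass points through f
    new_mean = w @ Y
    diff = Y - new_mean
    new_cov = (w * diff.T) @ diff                       # weighted covariance
    return new_mean, new_cov

# Example: a polar-to-Cartesian transform, a classic nonlinear case.
m, C = np.array([1.0, np.pi / 4]), np.diag([0.01, 0.05])
f = lambda p: np.array([p[0] * np.cos(p[1]), p[0] * np.sin(p[1])])
print(unscented_transform(m, C, f))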

We were able to build a simple Kalman Filter that accurately recovers the true x and y velocities from noisy data, and we are now working on upgrading it to a UKF.
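For illustration, a filter along those lines can be written compactly. The sketch below is a one-dimensional constant-velocity Kalman filter run on synthetic noisy velocity data, with placeholder noise values rather than our tuned ones (one such filter can be run per axis).

import numpy as np

dt = 0.05
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition; state = [position, velocity]
H = np.array([[0.0, 1.0]])              # we measure velocity directly
Q = np.diag([1e-4, 1e-3])               # process noise (placeholder values)
R = np.array([[0.25]])                  # measurement noise (placeholder)

x = np.zeros(2)                         # state estimate
P = np.eye(2)                           # estimate covariance

def kf_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the noisy velocity measurement z
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

true_v = 2.0
measurements = true_v + 0.5 * np.random.randn(200)   # synthetic noisy velocity data
for z in measurements:
    x, P = kf_step(x, P, np.atleast_1d(z))
print("estimated velocity: %.2f (true %.1f)" % (x[1], true_v))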

MPC-based Path Planning

Since the beginning of the year we’ve been implementing and testing different path planning techniques to balance the trade-off between taking the shortest path and achieving the fastest time through a course. It is a trade-off because at times it is advantageous to take a longer racing line so the car can maintain a higher speed. There are many ways of characterizing the “best” racing line depending on the type of turn and the conditions, so we generally aim to be tangent to the inside track boundary at the apex of each turn.

Once we were able to do this, we wanted to improve how we find the desired velocity at each point along the track. To assist with the vehicle dynamics needed for this, we implemented an MPC-based algorithm.
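As a rough illustration of the underlying idea (and not our MPC formulation itself), a first-pass velocity profile can be obtained by capping speed so the lateral acceleration v²·κ stays below a limit at each point of the path. Everything in this sketch is a simplification with made-up limits.

import numpy as np

def curvature_limited_speeds(path_xy, a_lat_max=10.0, v_cap=25.0):
    """Rough speed profile: limit speed so lateral acceleration stays under a_lat_max."""
    x, y = path_xy[:, 0], path_xy[:, 1]
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    # Curvature of a parametric planar curve.
    kappa = np.abs(dx * ddy - dy * ddx) / np.maximum((dx**2 + dy**2) ** 1.5, 1e-9)
    # v^2 * kappa <= a_lat_max  ->  v <= sqrt(a_lat_max / kappa), capped at v_cap.
    return np.minimum(np.sqrt(a_lat_max / np.maximum(kappa, 1e-9)), v_cap)

# e.g. speeds = curvature_limited_speeds(planned_path_xy)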

The MPC-based algorithm looks ahead over a finite horizon and, using information about the portion of the course already behind it, finds an optimal trajectory to get the vehicle through the course as fast as possible. With this technique we can find the fastest path and its associated velocity profile. This information will then be fed into a PID controller to create steering and acceleration requests for the rest of our car.
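A PID controller itself is simple; the sketch below shows the basic structure we have in mind for turning path-following errors into steering and acceleration requests. The gains here are placeholders, not tuned values.

class PID:
    """Minimal PID controller sketch."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g. one loop for steering (cross-track error) and one for speed (velocity error):
steer_pid = PID(kp=0.8, ki=0.0, kd=0.1)
speed_pid = PID(kp=0.5, ki=0.05, kd=0.0)
# steer_request = steer_pid.update(cross_track_error, dt)
# accel_request = speed_pid.update(desired_speed - current_speed, dt)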

 

LiDAR and Mechanical Update

This past month we’ve been continuing to evaluate our mechanical systems in order to develop actuation techniques and sensor mounting strategies. We’ve also been testing LiDAR sensor output (visualized in ROS), looking into point cloud object detection algorithms, and learning how to format our data properly for those specific techniques. Additionally, our team has cleaned and re-organized our shop space and implemented new preventive safety measures to ensure work can be done on the car efficiently while ensuring a safe environment for all students.

 

Inside view of the motor we will be using – from Brammo Empulse motorcycle, donated by Polaris. Taken apart to inspect how mounting orientation could affect oil lubrication within the gear train.

 

Saved image of LiDAR output to ROS from a SICK MRS-6124 unit. This picture is of the room surrounding the sensor. There are 24 scan layers total with a horizontal FOV of 120° and vertical FOV of 15°.

Saved image of LiDAR output to ROS from a SICK MRS-6124 unit. In this view the sensor was placed on the ground in front of a set of six cones which can be seen in the center of the image.

Sample point cloud data echoed from ROS /cloud topic published by the sick_scan package (publicly available at http://wiki.ros.org/sick_scan). This data will be fed to an additional ROS node that will parse and convert the data to the right format (as needed) for our point cloud object detection algorithms.
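A minimal version of such a node might look like the sketch below, which subscribes to the /cloud topic and unpacks each PointCloud2 message into (x, y, z) points. Node and variable names are illustrative.

import rospy
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2

def cloud_callback(msg):
    # read_points yields (x, y, z) tuples directly from the packed message.
    points = list(pc2.read_points(msg, field_names=("x", "y", "z"), skip_nans=True))
    rospy.loginfo("received %d points", len(points))
    # ...hand the points off to ground-plane removal / cone detection here...

rospy.init_node("cloud_listener")
rospy.Subscriber("/cloud", PointCloud2, cloud_callback)
rospy.spin()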

Example of initial motor and servo testing for an RC car. PWM control code was written on a Raspberry Pi and used with the car’s existing ESC to control motor speed and servo angle (for steering). We’ll be using the RC car to experiment with basic trajectory-following operations using PID control. Our competition car will have very different actuation, but our hope is that the RC car will allow us to evaluate concept feasibility.
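As a sketch of that kind of test code (with made-up pin numbers and duty cycles), a 50 Hz PWM signal from the Pi’s GPIO can drive both the steering servo and the ESC, where a 1–2 ms pulse corresponds to roughly 5–10% duty cycle.

import RPi.GPIO as GPIO
import time

STEER_PIN, ESC_PIN = 18, 19     # hypothetical BCM pin assignments

GPIO.setmode(GPIO.BCM)
GPIO.setup(STEER_PIN, GPIO.OUT)
GPIO.setup(ESC_PIN, GPIO.OUT)

steer = GPIO.PWM(STEER_PIN, 50)  # 50 Hz servo/ESC signal
esc = GPIO.PWM(ESC_PIN, 50)
steer.start(7.5)                 # ~1.5 ms pulse: servo centered
esc.start(7.5)                   # ~1.5 ms pulse: ESC neutral

time.sleep(2)
steer.ChangeDutyCycle(8.5)       # turn slightly
esc.ChangeDutyCycle(8.0)         # gentle forward throttle
time.sleep(2)

steer.stop(); esc.stop(); GPIO.cleanup()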

 

 

Autonomous Controls and Electrical Update

Since acquiring our object recognition sensors we’ve been working to calibrate them and start setting them up with ROS, the software framework we’ll be using to link the sensor outputs with our sensor fusion, object detection filtering, SLAM, path planning, and path following algorithms. We’re in the process of testing our PID control with an RC car while building our computational nodes for an EKF and SLAM.

Test output from one of our LiDAR units. We’re currently working on converting the raw hex data from the unit to distance/angle coordinate pairs.

Experimenting with our stereo camera. We’re currently calibrating our device for different lighting conditions and developing our object recognition pipeline to extract distance data more efficiently.

 

 

Optimizing our path planning algorithms. The upper image is a top-down view showing a fastest time/distance path through a sample course. The lower image shows the velocity profile along the path, from rough vehicle dynamics calculations. We are working to increase algorithm speed and characterize the speed profiles more accurately.

Our current top level block diagram plan for the autonomous data pipeline. Sensors will pick up cone and car locations and feed them into the NVIDIA unit. The output is desired steering, velocity, and acceleration, which the ECU will convert into actuation via PID control.

Our stereo camera feeding IMU directional data to a ROS topic. Visualized using rviz within ROS.

Welcome New Members!


At our new member orientation meeting this past Sunday we welcomed several new students to each of our sub teams. Especially as a first year team with lots of work ahead of us, it’s great to have that influx of help!

The team is fully student-run, with our Faculty Advisor Glenn Bower overseeing our work. The team is organized into subteams, each with a leader who works with a group of students to make their subsystem of the car come together. Team members collaborate with each other, university professors, and industry contacts to develop their design projects. These projects vary between subteams but include powertrain CAD, actuation design (steering, braking, emergency stop system), chassis kinematics and loading, sensor calibration, object recognition, path planning, algorithm optimization, image processing, battery sizing, electrical schematic design, and more.

The team brings together students from different (primarily engineering and business) majors. For example, mechanical engineers are exposed to electrical design, electrical engineers see FEA first hand, business majors learn the engineering design process, and computer science students see how budgets and sponsor outreach are coordinated. This collaborative approach is only successful with all team members learning critical, out-of-the-classroom skills such as teamwork, communication, problem solving, resilience, and how to apply their coursework to real world challenges.

We are excited to have a new wave of motivated students join our team and are looking forward to the work we will accomplish together!

Chassis Work

Front Brake Rotors

We have been making great progress on the chassis systems for the car. We got all the tires mounted onto the wheels and assembled the brake lines and calipers. We also started putting together the master cylinders and bias bar assembly. We are hoping to wrap up the major chassis systems within the next week!