Todd Sullivan, Nuwan I. Senaratna, Lawrence McAfee
Course Project for Stanford's CS 221 Artificial Intelligence: Principles & Techniques
Stanford Department of Computer Science

LittleDog was my course project for the breadth artificial intelligence course in the AI specialization of Stanford's CS Master's program. Students had the option of pursuing a computer vision or robotics challenge problem, or of creating their own project. I led a group of three on the robotics challenge problem, called LittleDog. The objective of the LittleDog project was to find a sequence of footsteps that a robot dog can execute to walk successfully across a terrain to a given goal. We worked in a simulator that used the Open Dynamics Engine to simulate the robot dog and a randomly generated terrain.
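At its core, this kind of footstep planning can be pictured as a graph search over candidate foot placements, with a viability test deciding which placements are allowed. The sketch below is a deliberately simplified, hypothetical illustration (a single foot on a 2D grid); the names `find_footstep_path` and `step_viable` are ours for this example and are not the project's actual API:

```python
import heapq

def find_footstep_path(start, goal, step_viable, max_step=1):
    """A*-style search over single-foot grid positions.

    `step_viable(pos)` stands in for a learned step classifier:
    it returns True if placing the foot at `pos` is acceptable.
    """
    def h(p):  # Manhattan-distance heuristic toward the goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), start, [start])]
    seen = {start}
    while frontier:
        _, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        x, y = pos
        # successor generation: every placement within max_step of the foot
        for dx in range(-max_step, max_step + 1):
            for dy in range(-max_step, max_step + 1):
                nxt = (x + dx, y + dy)
                if nxt in seen or not step_viable(nxt):
                    continue
                seen.add(nxt)
                heapq.heappush(frontier, (len(path) + h(nxt), nxt, path + [nxt]))
    return None  # no viable footstep sequence found

# Example: a 6x6 terrain with an obstacle column at x == 2, passable only at y == 3.
viable = lambda p: 0 <= p[0] <= 5 and 0 <= p[1] <= 5 and (p[0] != 2 or p[1] == 3)
path = find_footstep_path((0, 0), (5, 0), viable)
```

The real problem is far harder (four feet, kinematic constraints, continuous terrain), but the same skeleton applies: a successor generator proposes steps and a classifier prunes the infeasible ones.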

The simulator generated random terrains of varying difficulty, from level zero to level three. A level zero terrain is flat, while a level three terrain is covered with obstacles. Pages seven and eight of our technical report show images of level three terrains. Our final program is capable of consistently navigating all terrain difficulties.

Technical Report

The Competition

Each of the challenge problems culminated in a competition at the end of the quarter. Of the 14 other teams participating in the LittleDog challenge, only one was able to consistently navigate level three terrains. Two other teams were able to navigate some level three terrains.

As described in Section 3.5.2 of the technical report, we trained the wrong step classifier for the competition. We did not discover the error until hours before the program submission deadline and did not have time to train the proper step classifier for all four feet. As a result, our competition submission contains a step classifier that did not take the slope of the terrain into account when predicting if a step is viable.

While we did not have enough time to train the proper classifier for the competition, we did train it in time to demonstrate better results in the technical report (which was due 24 hours after the competition submission deadline). In one extreme case on a level three terrain, our competition submission took two hours and forty-seven minutes to find a successful path, while our final program with the proper classifier took sixteen minutes. Despite a step classifier that was significantly subpar under certain conditions, we placed second out of fifteen teams.
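To illustrate why the missing slope feature mattered: a logistic step classifier of the kind we used scores a candidate step by passing a weighted sum of terrain features through a sigmoid, so omitting slope removes its (typically strong, negative) influence on the prediction. The weights and feature names below are invented for illustration only; the actual classifier and features are described in Section 3 of the technical report:

```python
import math

def step_probability(features, weights, bias):
    """Logistic model: estimated probability that a step succeeds.

    `features` and `weights` are parallel lists; both the feature
    set and the weight values here are hypothetical.
    """
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights: steeper slope sharply lowers step viability.
weights = [-3.0, 1.5]   # [terrain_slope, foothold_flatness]
prob_flat  = step_probability([0.0, 1.0], weights, bias=0.5)  # flat ground
prob_steep = step_probability([1.0, 1.0], weights, bias=0.5)  # steep slope
```

With the slope feature dropped (as in our competition submission), both cases would receive the same score, which is why steps on sloped terrain were misjudged.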

Member Contributions

My group worked on the LittleDog project for CS 221 and on GARI simultaneously, so we did not necessarily divide tasks evenly within each project on its own. To gauge each member's relative contribution, one needs to view the breakdown for both projects. The following list details all group contributions to the LittleDog project as they pertain to the sections of the technical report.

  • I solely developed the base system, problem abstraction, and successor generator described in Section 2 and its subsections.
  • I designed and implemented the base system for the step classifiers in Section 3, including doesItAll_LC in Section 3.1.
  • Lawrence and I codeveloped the logistic classifier training functionality.
  • We collectively came up with potential features for the step classifiers.
  • I solely developed the data collection, feedback loop, parsing, and cross validation functionality described in Section 3.2.
  • I solely developed the logistic classifier feature picker described in Sections 3.4.1 and 3.4.2.
  • Lawrence solely developed the Naive Bayes classifier described in Section 3.5.1 and created the SVM classifier described in Section 3.5.2 that wraps around the LIBSVM library.
  • I executed all tests of the SVM classifier with various parameters as described in Section 3.5.2.
  • We collectively evaluated the results of the previously mentioned tests to determine the best parameters for the SVM classifier as described in Section 3.5.2.
  • Nuwan came up with and designed the hardwired classifier described in Section 3.6 and I implemented the classifier within our system.
  • Nuwan and I codeveloped the high level search described in Section 4 with the following exceptions:
    • Lawrence and I codeveloped the within-square goal placement functionality of the high level search described in the third paragraph of Section 4.1.
    • I solely developed the visualization/evaluation techniques in Section 4.2.
  • We collectively used the visualization/evaluation techniques in Section 4.2 to determine the optimal parameters for our high level search.
  • We collectively developed the turning functionality described in Section 5.
    • Nuwan came up with the original idea and design for the turning functionality.
    • Lawrence designed and implemented the functionality to estimate the direction the dog is facing and to compute the angle between that direction and the goal.
    • I implemented the turning search functionality and integrated the turning search to work seamlessly within the general walking search functionality.
  • I solely developed the reporting functionality that creates log files of terrain runs as described in Section 6.
  • Lawrence developed the ZStat class described in Section 6 that parses a log file of terrain runs and produces statistics pertaining to the program's performance.
  • I developed the optimizations described in Section 7.
  • I was the sole writer and formatter of the technical report.
  • I was the primary editor of the technical report and Nuwan was the secondary editor.
  • We collectively participated in several group programming sessions to solve peculiar bugs at various points in development.
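As an illustration of the angle computation used by the turning functionality described above, the following sketch computes the signed angle between the dog's heading and the goal using `atan2`. All names and the coordinate convention are assumptions for this example, not the project's actual code:

```python
import math

def angle_to_goal(dog_pos, heading, goal_pos):
    """Signed angle (radians) the dog must turn to face the goal.

    `heading` is the dog's facing direction in radians; a positive
    result means a counter-clockwise turn. Names are illustrative.
    """
    dx = goal_pos[0] - dog_pos[0]
    dy = goal_pos[1] - dog_pos[1]
    bearing = math.atan2(dy, dx)   # direction from dog to goal
    diff = bearing - heading
    # Normalize to (-pi, pi] so the dog always takes the shorter turn.
    while diff <= -math.pi:
        diff += 2 * math.pi
    while diff > math.pi:
        diff -= 2 * math.pi
    return diff
```

The turning search can then compare this angle against a threshold to decide whether to insert turning steps before resuming the general walking search.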

Source Code

I cannot release the source code at this time because the professor plans to use the LittleDog challenge problem in future CS 221 offerings.