Course Review: CS 7641 Machine Learning @Georgia Tech

I just completed the CS 7641: Machine Learning course. It was my fourth course in the Georgia Tech OMSCS (Online Master of Science in Computer Science) program, and fortunately, I earned a letter grade of A!

It was an exciting journey, but it required a lot of effort to complete the assignments and study for the exams. In this post, I'll explain what I did and what I thought about the course.

1. Course Topics

This ML class covers a wide range of topics across the machine learning field. It includes the following (a small supervised-learning sketch follows the list):

  • Supervised Learning
    • Decision Trees (ID3)
    • Neural Networks
    • Regression and Classification
    • Ensemble Learning (Boosting)
    • Instance-Based Learning (kNN)
    • Kernel Methods and SVMs (Kernel Trick)
    • Computational Learning Theory (PAC Learnable)
    • VC Dimensions
    • Bayesian Learning, Bayesian Inference
  • Unsupervised Learning
    • Randomized Optimization (Hill Climbing, Simulated Annealing, Genetic Algorithms, MIMIC)
    • Clustering (K-Means, EM, etc.)
    • Feature Transformation (PCA, ICA, RCA, LDA, etc.)
    • Information Theory
  • Reinforcement Learning
    • Markov Decision Process (Bellman Equation, Value Iteration, Policy Iteration)
    • Reinforcement Learning (Q-Learning)
    • Game Theory (Nash Equilibrium, Mixed Strategy, IPD, Minimax Q, Nash Q)
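
For a flavor of the supervised-learning portion, here is a minimal sketch that fits a decision tree and a kNN classifier with scikit-learn. It is my own illustrative placeholder (the dataset and parameters are arbitrary), not code from the course.

```python
# Minimal sketch: decision tree vs. kNN on a toy dataset (illustrative only,
# not an actual course assignment). Uses scikit-learn's built-in iris dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Decision tree (the lectures present ID3; scikit-learn uses a CART-style tree)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("Decision tree accuracy:", tree.score(X_test, y_test))

# Instance-based learning: k-nearest neighbors
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("kNN accuracy:", knn.score(X_test, y_test))
```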

2. Assignments & Exams

The course has a total of four assignments and two exams. All of the assignments required a lot of work; they were very tough and time-consuming. I can't describe them in detail, but the projects covered the main topics.

The key to success in these assignments was the analysis. It is necessary to include many graphs and plots to visualize what is happening: I needed to explain why an approach worked or didn't, why I chose a particular criterion, how I defined convergence, and so on.
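
As an illustration of the kind of plot I mean, here is a minimal sketch that draws a learning curve with scikit-learn and matplotlib. The estimator, dataset, and parameters are my own arbitrary choices, not taken from any assignment.

```python
# Minimal sketch of a learning-curve plot (illustrative placeholder, not an
# assignment solution). Shows training vs. cross-validation accuracy as the
# number of training examples grows.
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import learning_curve
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
sizes, train_scores, cv_scores = learning_curve(
    DecisionTreeClassifier(max_depth=8), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5)

plt.plot(sizes, train_scores.mean(axis=1), "o-", label="training score")
plt.plot(sizes, cv_scores.mean(axis=1), "o-", label="cross-validation score")
plt.xlabel("number of training examples")
plt.ylabel("accuracy")
plt.legend()
plt.show()
```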

It is also very important to implement code that works as expected. I used libraries such as scikit-learn, ABAGAIL, and OpenAI Gym, but the core logic had to be implemented on my own, which was good training in building actual working examples.
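
As an example of what implementing the core logic can look like, here is a minimal, self-contained sketch of simulated annealing on the toy OneMax problem (maximize the number of 1s in a bit string). It is my own illustration, not course code and not ABAGAIL's implementation.

```python
# Minimal sketch: simulated annealing on OneMax (illustrative placeholder only).
import math
import random

def fitness(bits):
    return sum(bits)  # OneMax: the fitness is simply the count of 1s

def simulated_annealing(n_bits=30, t_start=10.0, cooling=0.95, iters=2000, seed=0):
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n_bits)]
    t = t_start
    for _ in range(iters):
        # Propose a neighbor by flipping one random bit
        neighbor = state[:]
        i = rng.randrange(n_bits)
        neighbor[i] ^= 1
        delta = fitness(neighbor) - fitness(state)
        # Always accept improvements; accept worse moves with probability e^(delta/T)
        if delta >= 0 or rng.random() < math.exp(delta / t):
            state = neighbor
        t = max(t * cooling, 1e-6)  # cool down, keeping the temperature positive
    return state, fitness(state)

best_state, best_fitness = simulated_annealing()
print("best fitness:", best_fitness)
```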

I think I spent more than 60 hours on each project. I spent all of my weekends on these assignments and exams because the deadlines came every two to three weeks. Overall, though, these projects deepened my understanding considerably.

3. Pros / Cons

Pros:

  • Covers a wide range of machine learning topics
  • Well-structured lectures
  • The professor and TAs are helpful and responsive to questions (on Piazza, Slack, etc.)
  • Weekly office hours

Cons:

  • Assignments are very tough
  • Time-consuming
  • Grading of the projects can be inconsistent, and the rubrics are unclear

4. Further Study

This class gave me a solid grounding in the overall foundations of machine learning. For further study, it is necessary to bridge the gap between academic foundations and real-world applications. To do that, I think it's a good idea to try Kaggle competitions and apply ML algorithms to real datasets.

Also, the lectures didn't cover modern deep learning (DL) or state-of-the-art papers, so I recommend studying DL through other lectures (such as those published by Stanford).

Personally, I had fun tackling the reinforcement learning problems. Some of them are simplified versions of real problems, such as the Taxi and Frozen Lake environments. It felt like a kind of game: you model the problem with transition probabilities and rewards, and given that model, the agent learns which action to take in order to maximize its cumulative reward. It is exciting when the agent learns the correct policy after training.
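
To make that concrete, here is a minimal tabular Q-learning sketch on the Frozen Lake environment from OpenAI Gym. The hyperparameters are arbitrary and the code assumes the newer Gym (>= 0.26) API; it is my own illustration, not an assignment solution.

```python
# Minimal tabular Q-learning sketch on FrozenLake (illustrative placeholder).
# Assumes Gym >= 0.26, where reset() returns (obs, info) and step() returns
# (obs, reward, terminated, truncated, info).
import numpy as np
import gym

env = gym.make("FrozenLake-v1")
Q = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, epsilon = 0.1, 0.99, 0.1  # arbitrary hyperparameters

for episode in range(5000):
    state, _ = env.reset()
    done = False
    while not done:
        # Epsilon-greedy action selection
        if np.random.rand() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
        Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
        state = next_state

# The greedy policy for the default 4x4 map, one action index per cell
print(np.argmax(Q, axis=1).reshape(4, 4))
```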

I think everyone who completes this course comes away with a solid understanding of ML! I pointed out a few cons, but I definitely recommend this course. Thanks.

by @takp