Machine Learning with Python Online Course
About the Course
Have you ever wanted to become a data scientist and build Machine Learning projects that solve real-life problems? If so, this course is for you.
You will train machine learning algorithms to classify flowers, predict house prices, recognise handwritten digits, identify staff who are most likely to leave prematurely, detect cancer cells and much more!
Inside the course, you'll learn how to:
- Set up a Python development environment correctly
- Gain a complete machine learning toolset to tackle most real-world problems
- Understand performance metrics for regression, classification and other ML algorithms, such as R-squared, MSE, accuracy, the confusion matrix, precision and recall, and know when to use each
- Combine multiple models by bagging, boosting or stacking
- Make use of unsupervised Machine Learning (ML) algorithms such as hierarchical clustering and k-means clustering to understand your data
- Develop in Jupyter (IPython) notebooks, Spyder and other IDEs
- Communicate visually and effectively with Matplotlib and Seaborn
- Engineer new features to improve algorithm predictions
- Use train/test splits, K-fold and stratified K-fold cross-validation to select the right model and predict how it will perform on unseen data
- Use SVMs for handwriting recognition, and for classification problems in general
- Use decision trees to predict staff attrition
- Apply association rules to retail shopping datasets
- And much more!
By the end of this course, you will have a portfolio of 12 Machine Learning projects that will help you land your dream job, or enable you to solve real-life problems in your business, job or personal life with Machine Learning algorithms.
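To give you a feel for the workflow, here is a minimal scikit-learn sketch of the train/test split and K-fold cross-validation ideas listed above, run on the built-in Iris dataset (the model choice here is purely illustrative):

```python
# Sketch: hold out a test set, then cross-validate on the training portion.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score

X, y = load_iris(return_X_y=True)

# Hold out 20% of the rows as unseen test data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
test_accuracy = model.score(X_test, y_test)

# 5-fold cross-validation estimates unseen-data performance
# without ever touching the held-out test set.
cv_scores = cross_val_score(LogisticRegression(max_iter=1000),
                            X_train, y_train, cv=5)
```

This split-then-cross-validate pattern is the backbone of every project in the course.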
Course Curriculum
Introduction
- What Does the Course Cover?
Getting Started with Anaconda
- [Windows OS] Downloading & Installing Anaconda
- [Windows OS] Managing Environment
- Navigating the Spyder & Jupyter Notebook Interface
- Downloading the Iris Dataset
- Data Exploration and Analysis
- Presenting Your Data
Regression
- Introduction
- Categories of Machine Learning
- Working with Scikit-Learn
- Boston Housing Data - EDA
- Correlation Analysis and Feature Selection
- Simple Linear Regression Modelling with Boston Housing Data
- Robust Regression
- Evaluate Model Performance
- Multiple Regression with statsmodels
- Multiple Regression and Feature Importance
- Ordinary Least Square Regression and Gradient Descent
- Regularised Methods for Regression
- Polynomial Regression
- Dealing with Non-linear relationships
- Feature Importance Revisited
- Data Pre-Processing 1
- Data Pre-Processing 2
- Variance Bias Trade Off - Validation Curve
- Variance Bias Trade Off - Learning Curve
- Cross Validation
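A minimal sketch of the regression workflow this section builds up, shown here on synthetic data rather than the Boston housing set used in the lectures, and scored with the MSE and R-squared metrics covered above:

```python
# Sketch: fit a line to noisy synthetic data, then evaluate the fit.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
# True relationship: y = 3x + 5, plus unit-variance noise.
y = 3.0 * X[:, 0] + 5.0 + rng.normal(0.0, 1.0, size=200)

model = LinearRegression().fit(X, y)
y_pred = model.predict(X)

mse = mean_squared_error(y, y_pred)  # average squared error
r2 = r2_score(y, y_pred)             # fraction of variance explained
```

Because the data are nearly linear, the recovered slope and intercept land close to the true 3 and 5.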
Classification
- Introduction
- Logistic Regression 1
- Logistic Regression 2
- MNIST Project 1 - Introduction
- MNIST Project 2 - SGDClassifiers
- MNIST Project 3 - Performance Measures
- MNIST Project 4 - Confusion Matrix, Precision, Recall and F1 Score
- MNIST Project 5 - Precision and Recall Tradeoff
- MNIST Project 6 - The ROC Curve
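The MNIST performance-measure lectures revolve around the confusion matrix, precision, recall and F1. A tiny hand-made label set shows how scikit-learn computes each one:

```python
# Sketch: classification metrics on a small set of true vs. predicted labels.
from sklearn.metrics import (confusion_matrix, precision_score,
                             recall_score, f1_score)

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

cm = confusion_matrix(y_true, y_pred)        # rows: true, cols: predicted
precision = precision_score(y_true, y_pred)  # TP / (TP + FP)
recall = recall_score(y_true, y_pred)        # TP / (TP + FN)
f1 = f1_score(y_true, y_pred)                # harmonic mean of the two
```

Here there are 4 true positives, 1 false positive and 1 false negative, so precision and recall both come out at 0.8.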
Support Vector Machine (SVM)
- Introduction
- Support Vector Machine (SVM) Concepts
- Linear SVM Classification
- Polynomial Kernel
- Gaussian Radial Basis Function
- Support Vector Regression
- Advantages and Disadvantages of SVM
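A small illustration of why kernels matter, with an illustrative dataset and default parameters: on concentric circles, which no straight line can separate, an RBF-kernel SVM succeeds where a linear one cannot:

```python
# Sketch: linear kernel vs. Gaussian RBF kernel on non-linearly-separable data.
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two concentric rings of points: inner circle is one class, outer the other.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

linear_acc = SVC(kernel="linear").fit(X, y).score(X, y)
rbf_acc = SVC(kernel="rbf", gamma="scale").fit(X, y).score(X, y)
```

The RBF kernel implicitly lifts the points into a space where the rings become separable; the linear kernel is stuck near chance.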
Decision Trees
- Introduction
- What is a Decision Tree?
- Training a Decision Tree
- Visualising a Decision Tree
- Decision Tree Learning Algorithm
- Decision Tree Regression
- Overfitting and Grid Search
- Where to From Here
- Project HR - Loading and Preprocessing Data
- Project HR - Modelling
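A quick sketch of the overfitting theme above: an unconstrained decision tree memorises the Iris training data perfectly, while capping `max_depth` (the kind of knob you would tune with grid search) reins it in:

```python
# Sketch: unconstrained vs. depth-limited decision trees on Iris.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# No depth limit: the tree keeps splitting until training error is zero.
deep_tree = DecisionTreeClassifier(random_state=0).fit(X, y)
deep_train_acc = deep_tree.score(X, y)

# max_depth=2 forces a simpler, more generalisable model.
shallow_tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
```

Perfect training accuracy is a warning sign, not a success; cross-validation (covered earlier) tells you which depth actually generalises.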
Ensemble Machine Learning
- Introduction
- Ensemble Learning Methods Introduction
- Bagging Part 1
- Bagging Part 2
- Random Forests
- Extra-Trees
- AdaBoost
- Gradient Boosting Machine
- XGBoost
- Project HR - Human Resources Analytics
- Ensemble of ensembles Part 1
- Ensemble of ensembles Part 2
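A minimal random forest sketch (bagging many decision trees), using scikit-learn's built-in breast cancer dataset as a stand-in for the HR project data:

```python
# Sketch: a bagged ensemble of 100 trees evaluated on a held-out split.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Each tree sees a bootstrap sample of the rows and a random subset
# of features; their votes are averaged into one prediction.
forest = RandomForestClassifier(n_estimators=100, random_state=42)
forest.fit(X_train, y_train)
forest_acc = forest.score(X_test, y_test)
```

Boosting (AdaBoost, gradient boosting, XGBoost) follows the same fit-then-score pattern but builds the trees sequentially rather than independently.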
k-Nearest Neighbours (kNN)
- kNN Introduction
- kNN Concepts
- kNN and Iris Dataset Demo
- Distance Metric
- Project Cancer Detection Part 1
- Project Cancer Detection Part 2
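The kNN demo in miniature: classify Iris flowers by majority vote among the five nearest neighbours, with the Euclidean distance metric made explicit:

```python
# Sketch: k-nearest-neighbours classification on Iris.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, random_state=0, stratify=y)

# Each test point takes the majority class of its 5 closest
# training points under Euclidean distance.
knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")
knn.fit(X_train, y_train)
knn_acc = knn.score(X_test, y_test)
```

The choice of k and of distance metric are the two levers the kNN lectures explore.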
Dimensionality Reduction
- Introduction
- Dimensionality Reduction Concept
- PCA Introduction
- Dimensionality Reduction Demo
- Project Wine 1: Dimensionality Reduction with PCA
- Project Wine 2: Choosing the Number of Components
- Kernel PCA
- Kernel PCA Demo
- LDA & Comparison between LDA and PCA
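A short PCA sketch in the spirit of the wine project, run here on standardised Iris data: project four features onto two principal components and check how much variance they retain:

```python
# Sketch: PCA dimensionality reduction from 4 features down to 2.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X, _ = load_iris(return_X_y=True)
X_scaled = StandardScaler().fit_transform(X)  # PCA is scale-sensitive

pca = PCA(n_components=2)
X_2d = pca.fit_transform(X_scaled)

# Fraction of the total variance captured by the two components.
retained = pca.explained_variance_ratio_.sum()
```

The explained-variance ratio is exactly the quantity you plot when choosing the number of components.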
Unsupervised Learning: Clustering
- Introduction
- Clustering Concepts
- MLxtend
- Ward’s Agglomerative Hierarchical Clustering
- Truncating Dendrogram
- k-Means Clustering
- Elbow Method
- Silhouette Analysis
- Mean Shift
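The clustering section in one sketch: k-means on three synthetic blobs, scored with the silhouette analysis covered above (data and parameters are illustrative):

```python
# Sketch: k-means clustering plus silhouette scoring.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Three well-separated Gaussian blobs; labels are unused (unsupervised).
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.6, random_state=0)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Silhouette near 1 means tight, well-separated clusters;
# near 0 means overlapping ones.
sil = silhouette_score(X, kmeans.labels_)
```

Rerunning this with different `n_clusters` values and comparing silhouettes (or inertia, for the elbow method) is how you pick k in practice.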