Machine Learning: Classification
Description
When you enroll in a course through Coursera, you can choose between a paid plan and a free plan.
- Free plan: No certificate; audit only. You will have access to all course materials except graded items.
- Paid plan: Commit to earning a Certificate, a trusted, shareable way to showcase your new skills.

About this course: Case Studies: Analyzing Sentiment & Loan Default Prediction

In our case study on analyzing sentiment, you will create models that predict a class (positive/negative sentiment) from input features (text of the reviews, user profile information, ...). In our second case study for this course, loan default prediction, you will tackle financial data and predict when a loan is likely to be risky or safe for the bank. These tasks are examples of classification, one of the most widely used areas of machine learning, with a broad array of applications, including ad targeting, spam detection, medical diagnosis and image classification.

In this course, you will create classifiers that provide state-of-the-art performance on a variety of tasks. You will become familiar with the most successful techniques, which are most widely used in practice, including logistic regression, decision trees and boosting. In addition, you will be able to design and implement the underlying algorithms that can learn these models at scale, using stochastic gradient ascent. You will implement these techniques on real-world, large-scale machine learning tasks. You will also address significant challenges you will face in real-world applications of ML, including handling missing data and measuring precision and recall to evaluate a classifier. This course is hands-on, action-packed, and full of visualizations and illustrations of how these techniques will behave on real data. We've also included optional content in every module, covering advanced topics for those who want to go even deeper!

Learning Objectives: By the end of this course, you will be able to:
- Describe the input and output of a classification model.
- Tackle both binary and multiclass classification problems.
- Implement a logistic regression model for large-scale classification.
- Create a non-linear model using decision trees.
- Improve the performance of any model using boosting.
- Scale your methods with stochastic gradient ascent.
- Describe the underlying decision boundaries.
- Build a classification model to predict sentiment in a product review dataset.
- Analyze financial data to predict loan defaults.
- Use techniques for handling missing data.
- Evaluate your models using precision-recall metrics.
- Implement these techniques in Python (or in the language of your choice, though Python is highly recommended).

Created by: University of Washington
Taught by: Carlos Guestrin, Amazon Professor of Machine Learning, Computer Science and Engineering
Taught by: Emily Fox, Amazon Professor of Machine Learning, Statistics
Each course is like an interactive textbook, featuring pre-recorded videos, quizzes and projects.
Help from your peers: Connect with thousands of other learners to debate ideas, discuss course material, and get help mastering concepts.
Certificates: Earn official recognition for your graded work, and share your success with friends, colleagues and employers.
University of Washington: Founded in 1861, the University of Washington is one of the oldest state-supported institutions of higher education on the West Coast and is one of the preeminent research universities in the world.

Syllabus
WEEK 1
Welcome!
Classification is one of the most widely used techniques in machine learning, with a broad array of applications, including sentiment analysis, ad targeting, spam detection, risk assessment, medical diagnosis and image classification. The core goal of classification is to predict a category or class y from some inputs x. Through this course, you will become familiar with the fundamental models and algorithms used in classification, as well as a number of core machine learning concepts. Rather than covering all aspects of classification, you will focus on a few core techniques, which are widely used in the real world to get state-of-the-art performance. By following our hands-on approach, you will implement your own algorithms on multiple real-world tasks, and deeply grasp the core techniques needed to be successful with these approaches in practice. This introduction to the course provides you with an overview of the topics we will cover and the background knowledge and resources we assume you have.
8 videos, 2 readings
- Reading: Slides presented in this module
- Video: Welcome to the classification course, a part of the Machine Learning Specialization
- Video: What is this course about?
- Video: Impact of classification
- Video: Course overview
- Video: Outline of first half of course
- Video: Outline of second half of course
- Video: Assumed background
- Video: Let's get started!
- Reading: Software tools you'll need
Linear Classifiers & Logistic Regression
Linear classifiers are amongst the most practical classification methods. For example, in our sentiment analysis case-study, a linear classifier associates a coefficient with the counts of each word in the sentence. In this module, you will become proficient in this type of representation. You will focus on a particularly useful type of linear classifier called logistic regression, which, in addition to allowing you to predict a class, provides a probability associated with the prediction. These probabilities are extremely useful, since they provide a degree of confidence in the predictions. In this module, you will also be able to construct features from categorical inputs, and to tackle classification problems with more than two classes (multiclass problems). You will examine the results of these techniques on a real-world product sentiment analysis task.
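As a rough sketch of this representation (the coefficients below are invented for illustration; this is not the course's assignment code), the snippet scores a review by summing per-word coefficients times word counts, then applies the sigmoid link to turn the score into a class probability:

```python
# A minimal sketch of a linear classifier with a logistic (sigmoid) link.
# The coefficients are made up for illustration.
import math

coefficients = {"intercept": 0.0, "great": 1.5, "awful": -2.1, "food": 0.1}

def score(word_counts):
    """Linear score: w0 + sum of w_j * count_j over words we have weights for."""
    s = coefficients["intercept"]
    for word, count in word_counts.items():
        s += coefficients.get(word, 0.0) * count
    return s

def predict_probability(word_counts):
    """Sigmoid link: P(y = +1 | x) = 1 / (1 + exp(-score(x)))."""
    return 1.0 / (1.0 + math.exp(-score(word_counts)))

review = {"great": 2, "food": 1}      # word counts for "great great food"
print(predict_probability(review))    # ~0.96, a confidently positive prediction
```

The raw score only tells you which side of the decision boundary a review falls on; the sigmoid output is what provides the degree of confidence this module emphasizes.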
18 videos, 2 readings
- Reading: Slides presented in this module
- Video: Linear classifiers: A motivating example
- Video: Intuition behind linear classifiers
- Video: Decision boundaries
- Video: Linear classifier model
- Video: Effect of coefficient values on decision boundary
- Video: Using features of the inputs
- Video: Predicting class probabilities
- Video: Review of basics of probabilities
- Video: Review of basics of conditional probabilities
- Video: Using probabilities in classification
- Video: Predicting class probabilities with (generalized) linear models
- Video: The sigmoid (or logistic) link function
- Video: Logistic regression model
- Video: Effect of coefficient values on predicted probabilities
- Video: Overview of learning logistic regression models
- Video: Encoding categorical inputs
- Video: Multiclass classification with 1 versus all
- Video: Recap of logistic regression classifier
- Reading: Predicting sentiment from product reviews
Graded: Linear Classifiers & Logistic Regression
Graded: Predicting sentiment from product reviews
WEEK 2
Learning Linear Classifiers
Once familiar with linear classifiers and logistic regression, you can now dive in and write your first learning algorithm for classification. In particular, you will use gradient ascent to learn the coefficients of your classifier from data. You will first need to define the quality metric for these tasks using an approach called maximum likelihood estimation (MLE). You will also become familiar with a simple technique for selecting the step size for gradient ascent. An optional, advanced part of this module will cover the derivation of the gradient for logistic regression. You will implement your own learning algorithm for logistic regression from scratch, and use it to learn a sentiment analysis classifier.
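As a rough sketch of where this module ends up (assuming NumPy, labels in {+1, -1}, and a feature matrix whose first column is the constant 1 for the intercept; the names are illustrative, not the assignment's):

```python
# A minimal sketch of batch gradient ascent on the logistic regression
# log-likelihood. Each step moves the coefficients in the direction that
# increases the likelihood of the observed labels.
import numpy as np

def gradient_ascent(features, labels, step_size=0.1, max_iter=500):
    """features: (N, D) array incl. a constant column; labels: (N,) in {+1, -1}."""
    coefficients = np.zeros(features.shape[1])
    indicator = (labels == +1).astype(float)           # 1[y_i = +1]
    for _ in range(max_iter):
        scores = features @ coefficients
        probabilities = 1.0 / (1.0 + np.exp(-scores))  # P(y = +1 | x_i, w)
        gradient = features.T @ (indicator - probabilities)
        coefficients += step_size * gradient           # ascend, not descend
    return coefficients

# Tiny synthetic example: an intercept column plus one feature.
X = np.array([[1., 0.], [1., 1.], [1., 2.], [1., 3.]])
y = np.array([-1, -1, +1, +1])
print(gradient_ascent(X, y))
```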
18 videos, 2 readings
- Reading: Slides presented in this module
- Video: Goal: Learning parameters of logistic regression
- Video: Intuition behind maximum likelihood estimation
- Video: Data likelihood
- Video: Finding best linear classifier with gradient ascent
- Video: Review of gradient ascent
- Video: Learning algorithm for logistic regression
- Video: Example of computing derivative for logistic regression
- Video: Interpreting derivative for logistic regression
- Video: Summary of gradient ascent for logistic regression
- Video: Choosing step size
- Video: Careful with step sizes that are too large
- Video: Rule of thumb for choosing step size
- Video: (VERY OPTIONAL) Deriving gradient of logistic regression: Log trick
- Video: (VERY OPTIONAL) Expressing the log-likelihood
- Video: (VERY OPTIONAL) Deriving probability y=-1 given x
- Video: (VERY OPTIONAL) Rewriting the log likelihood into a simpler form
- Video: (VERY OPTIONAL) Deriving gradient of log likelihood
- Video: Recap of learning logistic regression classifiers
- Reading: Implementing logistic regression from scratch
Graded: Learning Linear Classifiers
Graded: Implementing logistic regression from scratch
Overfitting & Regularization in Logistic Regression
As we saw in the regression course, overfitting is perhaps the most significant challenge you will face as you apply machine learning approaches in practice. This challenge can be particularly significant for logistic regression, as you will discover in this module, since not only do you risk getting an overly complex decision boundary, but your classifier can also become overly confident about the probabilities it predicts. In this module, you will investigate overfitting in classification in significant detail, and obtain broad practical insights from some interesting visualizations of the classifiers' outputs. You will then add a regularization term to your optimization to mitigate overfitting. You will investigate both L2 regularization to penalize large coefficient values, and L1 regularization to obtain additional sparsity in the coefficients. Finally, you will modify your gradient ascent algorithm to learn regularized logistic regression classifiers. You will implement your own regularized logistic regression classifier from scratch, and investigate the impact of the L2 penalty on real-world sentiment analysis data.
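The change to the learning algorithm is small. As a sketch (same assumed NumPy setup as the previous module's example), L2 regularization just subtracts a penalty term from each coefficient's gradient, conventionally leaving the intercept unpenalized:

```python
# A minimal sketch of gradient ascent for L2-regularized logistic regression:
# maximize log-likelihood minus l2_penalty * ||w||^2 (intercept excluded).
import numpy as np

def regularized_gradient_ascent(features, labels, l2_penalty,
                                step_size=0.1, max_iter=500):
    coefficients = np.zeros(features.shape[1])
    indicator = (labels == +1).astype(float)
    for _ in range(max_iter):
        probabilities = 1.0 / (1.0 + np.exp(-(features @ coefficients)))
        gradient = features.T @ (indicator - probabilities)
        penalty = 2.0 * l2_penalty * coefficients
        penalty[0] = 0.0                    # assume column 0 is the intercept
        coefficients += step_size * (gradient - penalty)
    return coefficients

X = np.array([[1., 0.], [1., 1.], [1., 2.], [1., 3.]])
y = np.array([-1, -1, +1, +1])
print(regularized_gradient_ascent(X, y, l2_penalty=0.0))  # plain MLE
print(regularized_gradient_ascent(X, y, l2_penalty=1.0))  # coefficients shrink
```

Larger l2_penalty values pull the coefficients toward zero, which is exactly what tames both the wiggly decision boundaries and the overconfident probabilities visualized in this module.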
13 videos, 2 readings
- Reading: Slides presented in this module
- Video: Evaluating a classifier
- Video: Review of overfitting in regression
- Video: Overfitting in classification
- Video: Visualizing overfitting with high-degree polynomial features
- Video: Overfitting in classifiers leads to overconfident predictions
- Video: Visualizing overconfident predictions
- Video: (OPTIONAL) Another perspective on overfitting in logistic regression
- Video: Penalizing large coefficients to mitigate overfitting
- Video: L2 regularized logistic regression
- Video: Visualizing effect of L2 regularization in logistic regression
- Video: Learning L2 regularized logistic regression with gradient ascent
- Video: Sparse logistic regression with L1 regularization
- Video: Recap of overfitting & regularization in logistic regression
- Reading: Logistic Regression with L2 regularization
Graded: Overfitting & Regularization in Logistic Regression
Graded: Logistic Regression with L2 regularization
WEEK 3
Decision Trees
Along with linear classifiers, decision trees are amongst the most widely used classification techniques in the real world. This method is extremely intuitive, simple to implement and provides interpretable predictions. In this module, you will become familiar with the core decision trees representation. You will then design a simple, recursive greedy algorithm to learn decision trees from data. Finally, you will extend this approach to deal with continuous inputs, a fundamental requirement for practical problems. In this module, you will investigate a brand new case-study in the financial sector: predicting the risk associated with a bank loan. You will implement your own decision tree learning algorithm on real loan data.
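As a sketch of the greedy step at the heart of that algorithm (binary 0/1 features such as one-hot-encoded categories, labels in {+1, -1}; the data and names below are invented for illustration):

```python
# A minimal sketch of greedy feature selection for one decision tree split:
# try every feature, predict the majority class in each branch, and keep the
# feature whose split has the lowest classification error.

def node_mistakes(labels):
    """Mistakes made if this node predicts its majority class."""
    positives = sum(1 for y in labels if y == +1)
    return min(positives, len(labels) - positives)

def best_splitting_feature(data, features, target):
    """data: list of dicts with 0/1 features and a +1/-1 target column."""
    best_feature, best_error = None, float("inf")
    for feature in features:
        left = [row[target] for row in data if row[feature] == 0]
        right = [row[target] for row in data if row[feature] == 1]
        error = (node_mistakes(left) + node_mistakes(right)) / len(data)
        if error < best_error:
            best_feature, best_error = feature, error
    return best_feature

loans = [{"late_payments": 1, "high_debt": 1, "safe": -1},
         {"late_payments": 0, "high_debt": 1, "safe": +1},
         {"late_payments": 1, "high_debt": 0, "safe": -1},
         {"late_payments": 0, "high_debt": 0, "safe": +1}]
print(best_splitting_feature(loans, ["late_payments", "high_debt"], "safe"))
# -> "late_payments": that split separates safe from risky loans perfectly
```

The full algorithm simply applies this selection recursively to each branch until a stopping condition is met.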
13 videos, 3 readings
- Reading: Slides presented in this module
- Video: Predicting loan defaults with decision trees
- Video: Intuition behind decision trees
- Video: Task of learning decision trees from data
- Video: Recursive greedy algorithm
- Video: Learning a decision stump
- Video: Selecting best feature to split on
- Video: When to stop recursing
- Video: Making predictions with decision trees
- Video: Multiclass classification with decision trees
- Video: Threshold splits for continuous inputs
- Video: (OPTIONAL) Picking the best threshold to split on
- Video: Visualizing decision boundaries
- Video: Recap of decision trees
- Reading: Identifying safe loans with decision trees
- Reading: Implementing binary decision trees
Graded: Decision Trees
Graded: Identifying safe loans with decision trees
Graded: Implementing binary decision trees
WEEK 4
Preventing Overfitting in Decision Trees
Out of all machine learning techniques, decision trees are amongst the most prone to overfitting. No practical implementation is possible without including approaches that mitigate this challenge. In this module, through various visualizations and investigations, you will investigate why decision trees suffer from significant overfitting problems. Using the principle of Occam's razor, you will mitigate overfitting by learning simpler trees. At first, you will design algorithms that stop the learning process before the decision trees become overly complex. In an optional segment, you will design a very practical approach that learns an overly-complex tree, and then simplifies it with pruning. Your implementation will investigate the effect of these techniques on mitigating overfitting on our real-world loan data set.
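A compact sketch of the stopping logic this module adds around the recursive tree builder (the builder itself is elided; the names and default values are illustrative):

```python
# A minimal sketch of early stopping for decision tree learning: stop
# recursing when the tree is deep enough, the node is small enough, or the
# best split no longer reduces the error by a meaningful amount.

def should_stop(num_points, depth, error_before, error_after,
                max_depth=6, min_node_size=10, min_error_reduction=0.0):
    if depth >= max_depth:           # condition 1: limit tree depth
        return True
    if num_points <= min_node_size:  # condition 2: too few points to split
        return True
    if error_before - error_after <= min_error_reduction:
        return True                  # condition 3: the split barely helps
    return False

print(should_stop(num_points=8, depth=3, error_before=0.20, error_after=0.19))
# -> True: only 8 points remain, below the minimum node size
```

Pruning, covered in the optional segment, takes the opposite approach: grow the full tree first, then remove subtrees that do not pay for their complexity.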
8 videos, 2 readings
- Reading: Slides presented in this module
- Video: A review of overfitting
- Video: Overfitting in decision trees
- Video: Principle of Occam's razor: Learning simpler decision trees
- Video: Early stopping in learning decision trees
- Video: (OPTIONAL) Motivating pruning
- Video: (OPTIONAL) Pruning decision trees to avoid overfitting
- Video: (OPTIONAL) Tree pruning algorithm
- Video: Recap of overfitting and regularization in decision trees
- Reading: Decision Trees in Practice
Graded: Preventing Overfitting in Decision Trees
Graded: Decision Trees in Practice
Handling Missing Data
Real-world machine learning problems are fraught with missing data. That is, very often, some of the inputs are not observed for all data points. This challenge is very significant, happens in most cases, and needs to be addressed carefully to obtain great performance. And, this issue is rarely discussed in machine learning courses. In this module, you will tackle the missing data challenge head on. You will start with the two most basic techniques to convert a dataset with missing data into a clean dataset, namely skipping missing values and inputing missing values. In an advanced section, you will also design a modification of the decision tree learning algorithm that builds decisions about missing data right into the model. You will also explore these techniques in your real-data implementation.
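A minimal sketch of the two purification strategies using pandas (the columns are invented for illustration):

```python
# A minimal sketch of the two basic ways to turn a dataset with missing
# values into a clean one: skip the affected rows, or impute the holes.
import numpy as np
import pandas as pd

loans = pd.DataFrame({
    "income":        [45_000, np.nan, 82_000, 61_000],
    "late_payments": [2, 0, np.nan, 1],
    "safe":          [-1, +1, +1, +1],
})

# Strategy 1: skipping. Simple, but here it throws away half the rows.
skipped = loans.dropna()

# Strategy 2: imputing. Fill each hole with a statistic of the observed
# values, e.g. the median for numeric columns.
imputed = loans.fillna({"income": loans["income"].median(),
                        "late_payments": loans["late_payments"].median()})

print(len(skipped), len(imputed))   # 2 4
```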
6 videos, 1 reading
- Reading: Slides presented in this module
- Video: Challenge of missing data
- Video: Strategy 1: Purification by skipping missing data
- Video: Strategy 2: Purification by imputing missing data
- Video: Modifying decision trees to handle missing data
- Video: Feature split selection with missing data
- Video: Recap of handling missing data
Graded: Handling Missing Data
WEEK 5
Boosting
One of the most exciting theoretical questions that have been asked about machine learning is whether simple classifiers can be combined into a highly accurate ensemble. This question led to the development of boosting, one of the most important and practical techniques in machine learning today. This simple approach can boost the accuracy of any classifier, and is widely used in practice, e.g., it's used by more than half of the teams who win the Kaggle machine learning competitions. In this module, you will first define the ensemble classifier, where multiple models vote on the best prediction. You will then explore a boosting algorithm called AdaBoost, which provides a great approach for boosting classifiers. Through visualizations, you will become familiar with many of the practical aspects of this technique. You will create your very own implementation of AdaBoost, from scratch, and use it to boost the performance of your loan risk predictor on real data.
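A minimal sketch of the per-round bookkeeping AdaBoost performs, assuming you already know which points the current weak classifier got right (the weak learner itself, e.g. a decision stump, is elided):

```python
# A minimal sketch of one AdaBoost round: weighted error -> coefficient of
# the weak learner in the ensemble vote -> reweight and normalize the data
# so the next round focuses on the current mistakes.
import math

def adaboost_round(weights, correct):
    """weights: current weight per data point; correct: bool per data point."""
    total = sum(weights)
    weighted_error = sum(w for w, ok in zip(weights, correct) if not ok) / total
    coefficient = 0.5 * math.log((1 - weighted_error) / weighted_error)
    # Shrink the weights of points we got right, grow the weights of mistakes.
    new_weights = [w * math.exp(-coefficient if ok else coefficient)
                   for w, ok in zip(weights, correct)]
    normalizer = sum(new_weights)
    return coefficient, [w / normalizer for w in new_weights]

alpha, weights = adaboost_round([0.25] * 4, [True, True, True, False])
print(round(alpha, 3))                 # 0.549
print([round(w, 3) for w in weights])  # the one mistake now holds half the mass
```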
13 videos, 3 readings
- Reading: Slides presented in this module
- Video: The boosting question
- Video: Ensemble classifiers
- Video: Boosting
- Video: AdaBoost overview
- Video: Weighted error
- Video: Computing coefficient of each ensemble component
- Video: Reweighing data to focus on mistakes
- Video: Normalizing weights
- Video: Example of AdaBoost in action
- Video: Learning boosted decision stumps with AdaBoost
- Reading: Exploring Ensemble Methods
- Video: The Boosting Theorem
- Video: Overfitting in boosting
- Video: Ensemble methods, impact of boosting & quick recap
- Reading: Boosting a decision stump
Graded: Exploring Ensemble Methods
Graded: Boosting
Graded: Boosting a decision stump
WEEK 6
Precision-Recall
In many real-world settings, accuracy or error are not the best quality metrics for classification. You will explore a case-study that significantly highlights this issue: using sentiment analysis to display positive reviews on a restaurant website. Instead of accuracy, you will define two metrics: precision and recall, which are widely used in real-world applications to measure the quality of classifiers. You will explore how the probabilities output by your classifier can be used to trade off precision against recall, and dive into this spectrum using precision-recall curves. In your hands-on implementation, you will compute these metrics with your learned classifier on real-world sentiment analysis data.
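A minimal sketch of both metrics and of the threshold knob that trades them off (the probabilities and labels below are invented; raising the threshold makes the classifier more conservative):

```python
# A minimal sketch of precision and recall computed from predicted
# probabilities at a chosen decision threshold.

def precision_recall(probabilities, labels, threshold=0.5):
    predictions = [+1 if p >= threshold else -1 for p in probabilities]
    true_pos = sum(1 for p, y in zip(predictions, labels) if p == y == +1)
    predicted_pos = sum(1 for p in predictions if p == +1)
    actual_pos = sum(1 for y in labels if y == +1)
    precision = true_pos / predicted_pos  # of reviews we show, how many are good?
    recall = true_pos / actual_pos        # of good reviews, how many do we show?
    return precision, recall

probs = [0.95, 0.80, 0.60, 0.55, 0.30]
truth = [+1,   +1,   -1,   +1,   -1]
print(precision_recall(probs, truth, threshold=0.5))  # (0.75, 1.0)
print(precision_recall(probs, truth, threshold=0.9))  # (1.0, 0.33...)
```

Sweeping the threshold from 0 to 1 and plotting the resulting pairs is exactly what produces the precision-recall curve.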
8 videos, 2 readings
- Reading: Slides presented in this module
- Video: Case-study where accuracy is not best metric for classification
- Video: What is good performance for a classifier?
- Video: Precision: Fraction of positive predictions that are actually positive
- Video: Recall: Fraction of positive data predicted to be positive
- Video: Precision-recall extremes
- Video: Trading off precision and recall
- Video: Precision-recall curve
- Video: Recap of precision-recall
- Reading: Exploring precision and recall
Graded: Precision-Recall
Graded: Exploring precision and recall
WEEK 7
Scaling to Huge Datasets & Online Learning
With the advent of the internet, the growth of social media, and the embedding of sensors in the world, the magnitudes of data that our machine learning algorithms must handle have grown tremendously over the last decade. This effect is sometimes called "Big Data". Thus, our learning algorithms must scale to bigger and bigger datasets. In this module, you will develop a small modification of gradient ascent called stochastic gradient, which provides significant speedups in the running time of our algorithms. This simple change can drastically improve scaling, but it makes the algorithm less stable and harder to use in practice. In this module, you will investigate the practical techniques needed to make stochastic gradient viable, and thus obtain learning algorithms that scale to huge datasets. You will also address a new kind of machine learning problem, online learning, where the data streams in over time, and we must learn the coefficients as the data arrives. This task can also be solved with stochastic gradient. You will implement your very own stochastic gradient ascent algorithm for logistic regression from scratch, and evaluate it on sentiment analysis data.
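A minimal sketch of the modification (same assumed NumPy setup as the batch gradient ascent sketch earlier): shuffle before each pass, then update from one data point at a time rather than summing the gradient over all N points:

```python
# A minimal sketch of stochastic gradient ascent for logistic regression:
# each update touches a single data point, so one pass over the data makes
# N cheap updates instead of one expensive full-gradient step.
import numpy as np

def stochastic_gradient_ascent(features, labels, step_size=0.1,
                               num_passes=5, seed=0):
    """features: (N, D) incl. a constant column; labels: (N,) in {+1, -1}."""
    rng = np.random.default_rng(seed)
    coefficients = np.zeros(features.shape[1])
    indicator = (labels == +1).astype(float)
    for _ in range(num_passes):
        for i in rng.permutation(len(labels)):    # shuffle before each pass
            probability = 1.0 / (1.0 + np.exp(-(features[i] @ coefficients)))
            coefficients += step_size * (indicator[i] - probability) * features[i]
    return coefficients

X = np.array([[1., 0.], [1., 1.], [1., 2.], [1., 3.]])
y = np.array([-1, -1, +1, +1])
print(stochastic_gradient_ascent(X, y))
```

Because single-point gradients are noisy, the coefficients oscillate around the optimum rather than settling on it, which is why this module warns against trusting the last coefficients and covers batching, step-size schedules, and measuring convergence.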
16 videos, 2 readings
- Reading: Slides presented in this module
- Video: Gradient ascent won't scale to today's huge datasets
- Video: Timeline of scalable machine learning & stochastic gradient
- Video: Why gradient ascent won't scale
- Video: Stochastic gradient: Learning one data point at a time
- Video: Comparing gradient to stochastic gradient
- Video: Why would stochastic gradient ever work?
- Video: Convergence paths
- Video: Shuffle data before running stochastic gradient
- Video: Choosing step size
- Video: Don't trust last coefficients
- Video: (OPTIONAL) Learning from batches of data
- Video: (OPTIONAL) Measuring convergence
- Video: (OPTIONAL) Adding regularization
- Video: The online learning task
- Video: Using stochastic gradient for online learning
- Video: Scaling to huge datasets through parallelization & module recap
- 閱讀: Training Logistic Regression via Stochastic Gradient Ascent
Graded: Scaling to Huge Datasets & Online Learning
Graded: Training Logistic Regression via Stochastic Gradient Ascent