Machine Learning: Clustering & Retrieval
Description
When you enroll for courses through Coursera, you can choose between a paid plan and a free plan.
- Free plan: Audit only, with no certification. You will have access to all course materials except graded items.
- Paid plan: Commit to earning a Certificate—it's a trusted, shareable way to showcase your new skills.

About this course: Case Studies: Finding Similar Documents
A reader is interested in a specific news article and you want to find similar articles to recommend. What is the right notion of similarity? Moreover, what if there are millions of other documents? Each time you want to retrieve a new document, do you need to search through all other documents? How do you group similar documents together? How do you discover new, emerging topics that the documents cover? In this third case study, finding similar documents, you will examine similarity-based algorithms for retrieval. In this course, you will also examine structured representations for describing the documents in the corpus, including clustering and mixed membership models, such as latent Dirichlet allocation (LDA). You will implement expectation maximization (EM) to learn the document clusterings, and see how to scale the methods using MapReduce.
Learning Outcomes: By the end of this course, you will be able to:
- Create a document retrieval system using k-nearest neighbors.
- Identify various similarity metrics for text data.
- Reduce computations in k-nearest neighbor search by using KD-trees.
- Produce approximate nearest neighbors using locality sensitive hashing.
- Compare and contrast supervised and unsupervised learning tasks.
- Cluster documents by topic using k-means.
- Describe how to parallelize k-means using MapReduce.
- Examine probabilistic clustering approaches using mixture models.
- Fit a mixture of Gaussians model using expectation maximization (EM).
- Perform mixed membership modeling using latent Dirichlet allocation (LDA).
- Describe the steps of a Gibbs sampler and how to use its output to draw inferences.
- Compare and contrast initialization techniques for non-convex optimization objectives.
- Implement these techniques in Python.
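As a taste of the retrieval case study, the minimal sketch below ranks a handful of toy documents by cosine similarity of their tf-idf vectors. It assumes scikit-learn is available; the tiny in-memory corpus and query index are purely illustrative, and the course builds these components up from first principles rather than relying on a library.

```python
# Minimal document-similarity sketch (illustrative only; not the course's implementation).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "the senate passed a new budget bill",
    "scientists discover a new exoplanet",
    "the house debates the federal budget",
    "astronomers image a distant galaxy",
]

# Represent each document as a tf-idf vector.
tfidf = TfidfVectorizer().fit_transform(corpus)

# Rank all documents by similarity to the one the reader is viewing.
query_index = 0
scores = cosine_similarity(tfidf[query_index], tfidf).ravel()
for idx in scores.argsort()[::-1]:
    if idx != query_index:
        print(f"{scores[idx]:.3f}  {corpus[idx]}")
```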
Created by: University of Washington
Taught by: Emily Fox, Amazon Professor of Machine Learning, Statistics
Taught by: Carlos Guestrin, Amazon Professor of Machine Learning, Computer Science and Engineering
Each course is like an interactive textbook, featuring pre-recorded videos, quizzes and projects.
Help from your peers: Connect with thousands of other learners and debate ideas, discuss course material, and get help mastering concepts.
Certificates: Earn official recognition for your work, and share your success with friends, colleagues, and employers.
University of Washington: Founded in 1861, the University of Washington is one of the oldest state-supported institutions of higher education on the West Coast and is one of the preeminent research universities in the world.
Syllabus
WEEK 1
Welcome
Clustering and retrieval are some of the most high-impact machine learning tools out there. Retrieval is used in almost every application and device we interact with, such as providing a set of products related to the one a shopper is currently considering, or a list of people you might want to connect with on a social media platform. Clustering can be used to aid retrieval, but is a more broadly useful tool for automatically discovering structure in data, like uncovering groups of similar patients.
This introduction to the course provides you with an overview of the topics we will cover and the background knowledge and resources we assume you have.
4 videos, 3 readings
- Reading: Slides presented in this module
- Video: Welcome and introduction to clustering and retrieval tasks
- Video: Course overview
- Video: Module-by-module topics covered
- Video: Assumed background
- Reading: Software tools you'll need for this course
- Reading: A big week ahead!
WEEK 2
Nearest Neighbor Search
We start the course by considering a retrieval task of fetching a document similar to one someone is currently reading. We cast this problem as one of nearest neighbor search, which is a concept we have seen in the Foundations and Regression courses. However, here, you will take a deep dive into two critical components of the algorithms: the data representation and metric for measuring similarity between pairs of datapoints. You will examine the computational burden of the naive nearest neighbor search algorithm, and instead implement scalable alternatives using KD-trees for handling large datasets and locality sensitive hashing (LSH) for providing approximate nearest neighbors, even in high-dimensional spaces. You will explore all of these ideas on a Wikipedia dataset, comparing and contrasting the impact of the various choices you can make on the nearest neighbor results produced.
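To make the LSH idea concrete, here is a rough NumPy sketch of random-hyperplane hashing, in the spirit of this module's "random lines to partition points" videos. The synthetic data, the number of hash bits, and the single-table, single-bin lookup are illustrative assumptions; the module also covers searching neighboring bins and using multiple tables.

```python
# Rough sketch of locality sensitive hashing with random hyperplanes (illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
n_docs, n_features, n_bits = 1000, 50, 8
X = rng.standard_normal((n_docs, n_features))   # stand-in for document vectors

# Each random vector defines a hyperplane; the sign pattern of the projections
# gives every point an 8-bit bin index.
planes = rng.standard_normal((n_features, n_bits))
powers = 1 << np.arange(n_bits)
bin_ids = ((X @ planes) >= 0) @ powers

# Candidate neighbors for a query are just the points sharing its bin;
# only these candidates are searched exhaustively.
query = X[0]
query_bin = ((query @ planes) >= 0) @ powers
candidates = np.where(bin_ids == query_bin)[0]

dists = np.linalg.norm(X[candidates] - query, axis=1)
approx_neighbors = candidates[np.argsort(dists)[:5]]
print("approximate nearest neighbors:", approx_neighbors)
```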
22 videos, 4 readings
- Reading: Slides presented in this module
- Video: Retrieval as k-nearest neighbor search
- Video: 1-NN algorithm
- Video: k-NN algorithm
- Video: Document representation
- Video: Distance metrics: Euclidean and scaled Euclidean
- Video: Writing (scaled) Euclidean distance using (weighted) inner products
- Video: Distance metrics: Cosine similarity
- Video: To normalize or not and other distance considerations
- Reading: Choosing features and metrics for nearest neighbor search
- Video: Complexity of brute force search
- Video: KD-tree representation
- Video: NN search with KD-trees
- Video: Complexity of NN search with KD-trees
- Video: Visualizing scaling behavior of KD-trees
- Video: Approximate k-NN search using KD-trees
- Reading: (OPTIONAL) A worked-out example for KD-trees
- Video: Limitations of KD-trees
- Video: LSH as an alternative to KD-trees
- Video: Using random lines to partition points
- Video: Defining more bins
- Video: Searching neighboring bins
- Video: LSH in higher dimensions
- Video: (OPTIONAL) Improving efficiency through multiple tables
- Reading: Implementing Locality Sensitive Hashing from scratch
- Video: A brief recap
Graded: Representations and metrics
Graded: Choosing features and metrics for nearest neighbor search
Graded: KD-trees
Graded: Locality Sensitive Hashing
Graded: Implementing Locality Sensitive Hashing from scratch
WEEK 3
Clustering with k-means
In clustering, our goal is to group the datapoints in our dataset into disjoint sets. Motivated by our document analysis case study, you will use clustering to discover thematic groups of articles by "topic". These topics are not provided in this unsupervised learning task; rather, the idea is to output such cluster labels that can be post-facto associated with known topics like "Science", "World News", etc. Even without such post-facto labels, you will examine how the clustering output can provide insights into the relationships between datapoints in the dataset. The first clustering algorithm you will implement is k-means, which is the most widely used clustering algorithm out there. To scale up k-means, you will learn about the general MapReduce framework for parallelizing and distributing computations, and then how the iterates of k-means can utilize this framework. You will show that k-means can provide an interpretable grouping of Wikipedia articles when appropriately tuned.
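As a rough sketch of the two alternating steps in k-means, and how they line up with the map and reduce phases discussed in this module, here is a compact NumPy version of Lloyd's iterations. The toy 2-D data, k = 3, and the simple random initialization are illustrative assumptions; the course covers smarter initialization via k-means++.

```python
# Compact k-means sketch; the assignment step mirrors the map phase and the
# per-cluster mean the reduce phase (illustrative toy data, not course code).
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((300, 2)) + rng.choice([-4, 0, 4], size=(300, 1))
k = 3
centroids = X[rng.choice(len(X), size=k, replace=False)]

for _ in range(20):
    # "Map": assign each point to its closest centroid.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    assignments = dists.argmin(axis=1)
    # "Reduce": recompute each centroid as the mean of its assigned points.
    centroids = np.array([
        X[assignments == j].mean(axis=0) if np.any(assignments == j) else centroids[j]
        for j in range(k)
    ])

print("final centroids:\n", centroids)
```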
13 videos, 2 readings
- Reading: Slides presented in this module
- Video: The goal of clustering
- Video: An unsupervised task
- Video: Hope for unsupervised learning, and some challenge cases
- Video: The k-means algorithm
- Video: k-means as coordinate descent
- Video: Smart initialization via k-means++
- Video: Assessing the quality and choosing the number of clusters
- Reading: Clustering text data with k-means
- Video: Motivating MapReduce
- Video: The general MapReduce abstraction
- Video: MapReduce execution overview and combiners
- Video: MapReduce for k-means
- Video: Other applications of clustering
- Video: A brief recap
Graded: k-means
Graded: Clustering text data with K-means
Graded: MapReduce for k-means
WEEK 4
Mixture Models
In k-means, observations are each hard-assigned to a single cluster, and these assignments are based just on the cluster centers, rather than also incorporating shape information. In our second module on clustering, you will perform probabilistic model-based clustering that provides (1) a more descriptive notion of a "cluster" and (2) accounts for uncertainty in assignments of datapoints to clusters via "soft assignments". You will explore and implement a broadly useful algorithm called expectation maximization (EM) for inferring these soft assignments, as well as the model parameters. To gain intuition, you will first consider a visually appealing image clustering task. You will then cluster Wikipedia articles, handling the high-dimensionality of the tf-idf document representation considered.
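To illustrate what the "soft assignments" of this module look like, the sketch below fits a two-component Gaussian mixture with scikit-learn's GaussianMixture, which runs EM under the hood. The toy data and component count are illustrative assumptions, and the graded assignment has you implement EM yourself rather than call a library.

```python
# Soft assignments from a Gaussian mixture fit by EM (illustrative toy data).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
X = np.vstack([
    rng.normal(loc=(-3, 0), scale=1.0, size=(100, 2)),
    rng.normal(loc=(3, 2), scale=0.5, size=(100, 2)),
])

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(X)

# Each row of the responsibilities sums to 1: a point can partially belong
# to several clusters instead of being hard-assigned to one.
print(np.round(gmm.predict_proba(X[:5]), 3))
```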
15 videos, 4 readings
- Reading: Slides presented in this module
- Video: Motivating probabilistic clustering models
- Video: Aggregating over unknown classes in an image dataset
- Video: Univariate Gaussian distributions
- Video: Bivariate and multivariate Gaussians
- Video: Mixture of Gaussians
- Video: Interpreting the mixture of Gaussian terms
- Video: Scaling mixtures of Gaussians for document clustering
- Video: Computing soft assignments from known cluster parameters
- Video: (OPTIONAL) Responsibilities as Bayes' rule
- Video: Estimating cluster parameters from known cluster assignments
- Video: Estimating cluster parameters from soft assignments
- Video: EM iterates in equations and pictures
- Video: Convergence, initialization, and overfitting of EM
- Video: Relationship to k-means
- Reading: (OPTIONAL) A worked-out example for EM
- Video: A brief recap
- Reading: Implementing EM for Gaussian mixtures
- Reading: Clustering text data with Gaussian mixtures
Graded: EM for Gaussian mixtures
Graded: Implementing EM for Gaussian mixtures
Graded: Clustering text data with Gaussian mixtures
WEEK 5
Mixed Membership Modeling via Latent Dirichlet Allocation
The clustering model inherently assumes that data divide into disjoint sets, e.g., documents by topic. But often our data objects are better described via memberships in a collection of sets, e.g., multiple topics. In our fourth module, you will explore latent Dirichlet allocation (LDA) as an example of such a mixed membership model that is particularly useful in document analysis. You will interpret the output of LDA and explore various ways it can be utilized, such as a set of learned document features. The mixed membership modeling ideas you learn about through LDA for document analysis carry over to many other interesting models and applications, like social network models where people have multiple affiliations.
Throughout this module, we introduce aspects of Bayesian modeling and a Bayesian inference algorithm called Gibbs sampling. You will be able to implement a Gibbs sampler for LDA by the end of the module.
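As a small illustration of mixed membership output, the sketch below fits LDA to a toy corpus with scikit-learn's LatentDirichletAllocation. Note that this implementation uses variational inference rather than the collapsed Gibbs sampler developed in this module; the corpus and the number of topics are illustrative assumptions.

```python
# Mixed membership: each document gets a distribution over topics rather than
# a single cluster label (toy corpus; variational inference, not Gibbs sampling).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

corpus = [
    "stocks fell as markets reacted to the budget",
    "the team won the championship game last night",
    "investors watch interest rates and the markets",
    "the coach praised the team after the game",
]

counts = CountVectorizer().fit_transform(corpus)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)
print(doc_topics.round(2))
```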
12 videos, 2 readings
- Reading: Slides presented in this module
- Video: Mixed membership models for documents
- Video: An alternative document clustering model
- Video: Components of latent Dirichlet allocation model
- Video: Goal of LDA inference
- Video: The need for Bayesian inference
- Video: Gibbs sampling from 10,000 feet
- Video: A standard Gibbs sampler for LDA
- Video: What is collapsed Gibbs sampling?
- Video: A worked example for LDA: Initial setup
- Video: A worked example for LDA: Deriving the resampling distribution
- Video: Using the output of collapsed Gibbs sampling
- Video: A brief recap
- Reading: Modeling text topics with Latent Dirichlet Allocation
Graded: Latent Dirichlet Allocation
Graded: Learning LDA model via Gibbs sampling
Graded: Modeling text topics with Latent Dirichlet Allocation
WEEK 6
Hierarchical Clustering & Closing Remarks
In the conclusion of the course, we will recap what we have covered. This includes both techniques specific to clustering and retrieval and foundational machine learning concepts that are more broadly useful.
We provide a quick tour of an alternative clustering approach called hierarchical clustering, which you will experiment with on the Wikipedia dataset. Following this exploration, we discuss how clustering-type ideas can be applied in other areas like segmenting time series. We then briefly outline some important clustering and retrieval ideas that we did not cover in this course.
We conclude with an overview of what's in store for you in the rest of the specialization.
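For a quick feel for agglomerative clustering and the dendrogram covered in this module, here is a brief SciPy sketch on toy data; the data and the Ward linkage choice are illustrative assumptions rather than the course's setup.

```python
# Agglomerative clustering sketch: repeatedly merge the two closest clusters,
# then cut the resulting dendrogram to obtain flat cluster labels.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-2, 0.5, size=(20, 2)), rng.normal(2, 0.5, size=(20, 2))])

Z = linkage(X, method="ward")                 # encodes the full merge tree (dendrogram)
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the tree into 2 clusters
print(labels)
```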
12 videos, 2 readings
- Reading: Slides presented in this module
- Video: Module 1 recap
- Video: Module 2 recap
- Video: Module 3 recap
- Video: Module 4 recap
- Video: Why hierarchical clustering?
- Video: Divisive clustering
- Video: Agglomerative clustering
- Video: The dendrogram
- Video: Agglomerative clustering details
- Video: Hidden Markov models
- Reading: Modeling text data with a hierarchy of clusters
- Video: What we didn't cover
- Video: Thank you!
Graded: Modeling text data with a hierarchy of clusters