2022-Winter-DSC190-Introduction to Data Mining

Undergraduate Class, HDSI, UCSD, 2022

Class Time: Tuesdays and Thursdays, 8:00 AM to 9:20 AM. Room: PETER 104. Zoom: https://ucsd.zoom.us/j/97017584161. Piazza: piazza.com/ucsd/winter2022/dsc190a00

Online Lecturing

Due to the COVID-19 Omicron Variant, in the first two weeks, this course will be delivered over Zoom: https://ucsd.zoom.us/j/97017584161

Overview

This course introduces current methods and models for analyzing and mining real-world data. It covers frequent pattern mining, regression & classification, clustering, and representation learning. No previous background in machine learning is required, but all participants should be comfortable with programming and with basic optimization and linear algebra.

There is no textbook required, but here are some recommended readings:

Prerequisites

Math, Stats, and Coding: (CSE 12 or DSC 40B) and (CSE 15L or DSC 80) and (CSE 103 or ECE 109 or MATH 181A or ECON 120A or MATH 183)

TAs and Tutors

  • Teaching Assistants: Dheeraj Mekala (dmekala AT eng.ucsd.edu)

Office Hours

Note: all times are in Pacific Time.

Grading

  • Homework: 8% each (24% total). The lowest of your four homework grades is dropped (equivalently, one homework can be skipped).
  • Midterm: 26%.
  • Data Mining Challenge: 25%.
  • Project: 25%.
  • You should complete all work individually, except for the Project.
  • Late submissions are NOT accepted.
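
To make the weighting concrete, here is a small illustrative sketch of how the components combine, including the dropped-lowest homework rule. This is not an official grading script; the function name and the 0-to-1 score scale are assumptions for illustration only.

```python
# Illustrative sketch of the grade weighting above -- not an official script.
# Assumes each component score is a fraction in [0, 1].

def course_grade(hw_scores, midterm, challenge, project):
    """Combine scores using the weights listed above; returns a 0-100 total."""
    assert len(hw_scores) == 4
    kept = sorted(hw_scores, reverse=True)[:3]      # drop the lowest homework
    hw_part = sum(8 * s for s in kept)              # 3 homeworks x 8% = 24%
    return hw_part + 26 * midterm + 25 * challenge + 25 * project

# Skipping one homework entirely costs nothing if the other three are strong:
print(course_grade([1.0, 1.0, 1.0, 0.0], 1.0, 1.0, 1.0))  # -> 100.0
```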

Lecture Schedule

Recording Note: The Dropbox website only previews the first hour; please download the recording video to watch the full length.

HW Note: All HWs are due before lecture, at 8:00 AM PT.

| Week | Date | Topic & Slides | Events |
|------|------|----------------|--------|
| 1 | 01/04 (Tue) | Introduction: Data Types, Tasks, and Evaluations | HW1 out |
| 1 | 01/06 (Thu) | Supervised - Least-Squares Regression and Logistic Regression | |
| 2 | 01/11 (Tue) | Supervised - Overfitting and Regularization | HW1 Due, HW2 out |
| 2 | 01/13 (Thu) | Supervised - Support Vector Machine | |
| 3 | 01/18 (Tue) | Supervised - Naive Bayes and Decision Tree | |
| 3 | 01/20 (Thu) | Supervised - Ensemble Learning: Bagging and Boosting | |
| 4 | 01/25 (Tue) | Cluster Analysis - K-Means Clustering & its Variants | HW2 Due, HW3 out |
| 4 | 01/27 (Thu) | Cluster Analysis - “Soft” Clustering: Gaussian Mixture | |
| 5 | 02/01 (Tue) | Cluster Analysis - Density-based Clustering: DBSCAN | |
| 5 | 02/03 (Thu) | Cluster Analysis - Principal Component Analysis | DM Challenge out |
| 6 | 02/08 (Tue) | Pattern Analysis - Frequent Pattern and Association Rules | |
| 6 | 02/10 (Thu) | Midterm (24 hours on this date) | |
| 7 | 02/15 (Tue) | Recommender System - Collaborative Filtering | HW3 Due, HW4 out |
| 7 | 02/17 (Thu) | Recommender System - Latent Factor Models | |
| 8 | 02/22 (Tue) | Text Mining - Zipf’s Law, Bags-of-words, and TF-IDF | |
| 8 | 02/24 (Thu) | Text Mining - Advanced Text Representations | DM Challenge due |
| 9 | 03/01 (Tue) | Network Mining - Small-Worlds & Random Graph Models | |
| 9 | 03/03 (Thu) | Network Mining - HITS, PageRank, Personalized PageRank and Node Embedding | |
| 10 | 03/08 (Tue) | Sequence Mining - Sliding Windows and Autoregression | |
| 10 | 03/10 (Thu) | Text Data as Sequence - Named Entity Recognition | HW4 Due |

Homework (24%)

The lowest of your four homework grades is dropped (equivalently, one homework can be skipped).

  • HW1: Concepts and Evaluations (8%). This homework focuses on data mining concepts and how to evaluate different tasks.
  • HW2: Regression and Classification (8%). This homework focuses on regression and classification tasks.
  • HW3: Cluster and Pattern Analysis (8%). This homework focuses on clustering methods and frequent pattern mining methods.
  • HW4: Applications (8%). This homework focuses on recommender systems, text mining, and network mining.

Midterm (26%)

It is an open-book, take-home exam covering all lectures given before the midterm. Most of the questions will be open-ended, and some may be slightly more difficult than the homework. You will have 24 hours to complete the midterm, which is expected to take about 2 hours.

  • Start: Feb 11, 9:30 AM PT
  • End: Feb 12, 9:30 AM PT
  • Midterm problems download: TBD
  • Please make your submissions on Gradescope.

Data Mining Challenge (25%)

It is an individual-based data mining competition with quantitative evaluation. The challenge runs from Feb 3, 0:00:01 AM PT to Feb 24, 4:59:59 PM PT. Note that the times displayed on Kaggle are in UTC, not PT.
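
Since Kaggle shows deadlines in UTC, it can help to double-check the conversion yourself. Here is a minimal sketch (assuming Python 3.9+ and its standard zoneinfo module; the variable name is illustrative) that converts the PT end time to UTC:

```python
# Minimal sketch: convert the challenge end time from Pacific Time to UTC,
# since Kaggle displays deadlines in UTC. Assumes Python 3.9+ (zoneinfo).
from datetime import datetime
from zoneinfo import ZoneInfo

end_pt = datetime(2022, 2, 24, 16, 59, 59, tzinfo=ZoneInfo("America/Los_Angeles"))
print(end_pt.astimezone(ZoneInfo("UTC")))  # 2022-02-25 00:59:59+00:00
```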

  • Challenge Statement, Dataset, and Details: TBD
  • Kaggle challenge link: TBD

Project (25%)

Instructions for both choices will be available soon. The project is **due on Sunday, Mar 13 EOD**.

Here is a quick overview:

  • Choice 1: Team-Based Open-Ended Project
    • 1 to 4 members per team; larger teams are held to higher expectations.
    • Define your own research problem and justify its importance
    • Come up with your hypothesis and find some datasets for verification
    • Design your own models or try a large variety of existing models
    • Write a 4- to 8-page report (research-paper style)
    • Submit your code
    • Up to a 5% bonus toward the total course grade for working demos/apps.
  • Choice 2: Individual-Based Deep Dive into Data Mining Methods
    • Implement a few models learned in this course from scratch.
    • Skeleton code will be provided soon. Your work is mostly “filling in blanks” following the TODOs outlined in the Jupyter Notebook.
    • Each model is worth a certain number of points; 6 points are required in total. The points for each model are listed at the end of the instruction slides.
    • Write a report (length based on points) describing your interesting findings.
    • Up to a 5% bonus toward the total course grade; roughly 1 point corresponds to 1%.