I. The Basic Information of the Course (Times New Roman, 12 points, bold font)
Course Number: 202012420022
The English Name of the Course: Deep Learning Methods and Applications
The Chinese Name of the Course: 深度学习方法及应用
In-class Hours and Allocation (Sample: total class hours: 32, classroom teaching: 28 class hours, classroom discussion: 4 class hours)
Credit(s): 2
Semester: 2
Applied Discipline (Professional Degree Category): Science and Technology
Intended Students: Master's and doctoral students
Evaluation Mode (Sample): Process Evaluation, etc.
Teaching Methods: Seminar-style Teaching, Case Teaching
Course Offering Department: Science and Engineering Discipline
II. Prerequisite Courses (Times New Roman, 12 points, bold font)
Linear algebra, calculus, probability and statistics, computer programming
III. The Objectives and Requirements of the Course (Times New Roman, 12 points; total length: about 200 words)
This course comprehensively introduces the basic concepts, main architectures, core methods, and key applications of neural-network-based deep learning technologies developed in recent years. The main content includes: (1) basic concepts and algorithms of machine learning and neural networks, together with their foundations in probability theory, linear algebra, and optimization theory; (2) mainstream deep learning architectures, activation functions, regularization techniques, practical algorithmic details, and application cases; (3) principles and applications of computer vision and natural language processing technologies; (4) an introduction to emerging technologies, including model compression and generative adversarial networks (GANs); (5) cutting-edge papers and technical discussions.
IV. The Content of the Course (Times New Roman, 12 points, bold font, 1000-2000 words)
Chapter 1 Basics of Machine Learning
Learning algorithms, capacity, overfitting and underfitting, hyperparameters and validation sets, estimators, bias and variance, maximum likelihood estimation, Bayesian estimation, supervised learning methods, unsupervised learning methods, stochastic gradient descent
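As an illustration of the last topic in this chapter, stochastic gradient descent on a least-squares linear model can be sketched as follows (a minimal toy example, not part of the official course content; NumPy assumed):

```python
import numpy as np

# Illustrative sketch: stochastic gradient descent fitting y = w*x + b
# on synthetic data generated with true parameters w = 3.0, b = 0.5.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 3.0 * x + 0.5 + rng.normal(0, 0.05, 200)

w, b, lr = 0.0, 0.0, 0.1
for epoch in range(50):
    for i in rng.permutation(200):      # visit samples in random order
        err = (w * x[i] + b) - y[i]     # prediction error on one sample
        w -= lr * err * x[i]            # gradient of 0.5*err**2 w.r.t. w
        b -= lr * err                   # gradient w.r.t. b

print(round(w, 1), round(b, 1))         # recovers roughly 3.0 and 0.5
```

Each update uses the gradient of the loss on a single sample, which is what distinguishes stochastic gradient descent from full-batch gradient descent.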
Chapter 2 Deep Feedforward Network
Gradient-based learning, hidden units, architecture design, back-propagation and other differentiation algorithms
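The topics above can be illustrated with a minimal feedforward network trained by manually derived back-propagation on the XOR toy task (an illustrative sketch only, not part of the official course content; layer sizes and learning rate are arbitrary choices):

```python
import numpy as np

# Two-layer feedforward network with manual back-propagation on XOR.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
Y = np.array([[0], [1], [1], [0]], float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer: 8 units
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(5000):
    h = np.tanh(X @ W1 + b1)          # forward: hidden activations (tanh)
    p = sigmoid(h @ W2 + b2)          # forward: output probability
    dp = p - Y                        # grad of cross-entropy through sigmoid
    dh = (dp @ W2.T) * (1 - h**2)     # back-propagate through tanh
    W2 -= 0.1 * h.T @ dp; b2 -= 0.1 * dp.sum(0)
    W1 -= 0.1 * X.T @ dh; b1 -= 0.1 * dh.sum(0)

preds = (p > 0.5).astype(int).ravel() # should recover the XOR pattern 0,1,1,0
print(preds)
```

The hidden layer is essential here: XOR is not linearly separable, so a network with no hidden units cannot fit it.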
Chapter 3 Regularization in Deep Learning
Parameter norm penalties, norm penalties as constrained optimization, regularization and under-constrained problems, dataset augmentation, noise robustness, semi-supervised learning, multi-task learning, early stopping, parameter tying and parameter sharing, sparse representations, bagging and other ensemble methods, dropout, adversarial training
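Of the techniques listed above, dropout admits a particularly compact sketch. The common "inverted dropout" variant zeroes random activations during training and rescales the survivors, so no correction is needed at test time (an illustrative sketch, not part of the official course content):

```python
import numpy as np

def dropout(h, p_keep, rng, train=True):
    """Inverted dropout: zero units with prob. 1 - p_keep, rescale the rest."""
    if not train:
        return h                       # test time: identity, no rescaling
    mask = rng.random(h.shape) < p_keep
    return h * mask / p_keep           # rescale so E[output] == input

rng = np.random.default_rng(0)
h = np.ones(100_000)
out = dropout(h, p_keep=0.8, rng=rng)
print(round(out.mean(), 2))            # expectation preserved, approx. 1.0
```

Dividing by `p_keep` is what keeps the expected activation unchanged between training and test time.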
Chapter 4 Optimization for Deep Models
Differences between learning and pure optimization, challenges in neural network optimization, basic algorithms, parameter initialization strategies, algorithms with adaptive learning rates, approximate second-order methods, optimization strategies and meta-algorithms
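As an example of an adaptive-learning-rate algorithm from this chapter, the Adam update can be sketched on a one-dimensional quadratic (an illustrative sketch, not part of the official course content; hyperparameters follow the commonly cited defaults):

```python
# Adam update minimizing f(w) = (w - 3)**2, minimum at w = 3.
w, m, v = 0.0, 0.0, 0.0
lr, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8

for t in range(1, 2001):
    g = 2 * (w - 3)                    # gradient of f at w
    m = beta1 * m + (1 - beta1) * g    # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * g*g  # second-moment estimate
    m_hat = m / (1 - beta1 ** t)       # bias correction for m
    v_hat = v / (1 - beta2 ** t)       # bias correction for v
    w -= lr * m_hat / (v_hat ** 0.5 + eps)

print(round(w, 1))                     # moves close to the minimum at w = 3
```

Dividing by the root of the second-moment estimate gives each parameter its own effective step size, which is the defining idea of this family of methods.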
Chapter 5 Convolutional Networks
Convolution operations, motivation, pooling, convolution and pooling as an infinitely strong prior, variants of the basic convolution function, structured outputs, data types, efficient convolution algorithms, random or unsupervised features, the neuroscientific basis of convolutional networks
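The core convolution operation above can be sketched as a naive 2-D implementation (strictly speaking cross-correlation, as is conventional in convolutional networks; "valid" padding, stride 1; an illustrative sketch only, not part of the official course content):

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 2-D cross-correlation with 'valid' padding and stride 1."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1       # output height
    ow = image.shape[1] - kw + 1       # output width
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):            # slide the kernel over the image
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

edge = np.array([[1.0, -1.0]])         # simple horizontal edge detector
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1]], float)
print(conv2d(img, edge))               # responds only at the 0-to-1 transition
```

The same small kernel is applied at every position, which is the parameter sharing that makes convolution an "infinitely strong prior" over fully connected layers.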
Chapter 6 Sequence Modeling: Recurrent and Recursive Networks
Unfolding computational graphs, recurrent neural networks, bidirectional RNNs, deep recurrent networks, recursive neural networks, long short-term memory (LSTM)
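The first two topics can be illustrated by the forward pass of a vanilla recurrent network unfolded over time, h_t = tanh(W_h h_{t-1} + W_x x_t + b) (an illustrative sketch with random, untrained weights; sizes are arbitrary choices, not part of the official course content):

```python
import numpy as np

rng = np.random.default_rng(0)
W_h = rng.normal(0, 0.5, (4, 4))       # hidden-to-hidden weights
W_x = rng.normal(0, 0.5, (4, 3))       # input-to-hidden weights
b = np.zeros(4)

def rnn_forward(xs):
    """Unfold the recurrence over a sequence, reusing the same weights."""
    h = np.zeros(4)                    # initial hidden state
    states = []
    for x_t in xs:                     # one step of the unfolded graph
        h = np.tanh(W_h @ h + W_x @ x_t + b)
        states.append(h)
    return states

seq = rng.normal(0, 1, (5, 3))         # sequence of 5 three-dim inputs
states = rnn_forward(seq)
print(len(states), states[-1].shape)   # one hidden state per time step
```

Reusing the same weight matrices at every time step is exactly the parameter sharing that unfolding the computational graph makes explicit.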
V. Reference Books, Reference Literature, and Reference Materials
1. Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.
2. Learn with Google AI (Machine Learning Crash Course)
3. Official TensorFlow/PyTorch tutorials
4. https://www.coursera.org/specializations/deep-learning
Outline Writer (Signature):
Leader in charge of teaching at the College (Signature):
Date: