

How I Organized a One-week University Course on Deep Learning

Last Updated on March 13, 2024 by Editorial Team

Author(s): Rodrigo da Motta C. Carvalho

Originally published on Towards AI.

How to teach Data Science is a hot topic in itself. If you are looking for ways to untangle this massive field into clear, teachable units, this article is for you: it presents my experience organizing a 20-hour course on Deep Learning at USP, the best university in Latin America, and how I managed to cover the main topics.

Image from the first day of the course. Image by the author.


The recency of the field and the fast-changing nature of its topics make Data Science, and especially Artificial Intelligence, a complicated subject to teach: it is hard to formulate a syllabus and to settle on a teaching approach.

The course goal was to provide an introduction to Deep Learning, exploring the most important and relevant topics in the area, as well as an introduction to PyTorch, the main Deep Learning framework in academia. The course was aimed at people who already had prior experience programming in Python, mainly manipulating NumPy arrays. Furthermore, basic knowledge of linear algebra and probability was essential.

The course covered five macro-topics: Fundamentals, Computer Vision, Geometric Deep Learning, Natural Language Processing (NLP), and Large Language Models (LLMs).

We’re going to talk more about how I selected the topics above. Each course day consisted of 4 hours: a theoretical/conceptual class (1h45), a coffee break (30 min), and a hands-on session (1h45).

How I built the course

I defined a few steps to think through when creating the course. There is no hierarchy; the idea is to follow the steps in order, even though they are not independent of one another. I'll list the steps first and then describe each one in more detail.

  • Understand your motivation
  • Know your public
  • Decide a well-defined scope
  • Make the topics talk among themselves
  • Filter the right person for each topic
  • Balance theory and hands-on
  • The Real deal

Understand your motivation

I’m truly inspired by education and knowledge transfer. There is nothing better than students being excited because they understand a concept. This is important because the other steps follow your motivation. For example, if the idea is to teach Deep Learning to a Data Science team of a company, I would change the structure of the course.

Know your public

When designing the course, we need to understand really well who the students are. This defines the syllabus, topic depth, communication techniques, and the context of the examples. Since the course was developed at a university, those interested were graduate and undergraduate students: people who want to apply the concepts in their research and also build a toolbox for the Data Science & AI market.

Educational background of the participants, from the registration form. Image by the author.

Even though all of them are university students, Data Science spans a huge range of areas, so the students can have a variety of backgrounds. We therefore need to narrow down the profile of these students. Firstly, to talk about Deep Learning, people must know the basics of Linear Algebra and Probability.

Student distribution of Linear Algebra and Probability skills. Image by the author.

Moreover, knowledge of Machine Learning is also required.

Student distribution of Machine Learning skills. Image by the author.

Finally, familiarity with Python Data Science frameworks was also assessed.

Distribution of familiarity with Python libraries. Image by the author.

These results indicate that the students had an adequate grasp of the mathematical concepts of linear algebra and probability, but insufficient knowledge of machine learning models. This is a clue as to which topics each instructor should explore and focus on in class. For example, there was no need to describe NumPy operations or matrix multiplication; however, we did need to focus on evaluation metrics.

Decide a well-defined scope

Once we know our motivation and student profile, we need to decide the range of subjects to be discussed: the topics that are relevant and essential to introducing deep learning and its main areas.

There is a macro-scope, which includes macro-topics such as Gradient Descent within Fundamentals, and a micro-scope, which consists of the subjects inside the macro-topics. For example, Gradient Descent is part of the macro-scope, but the intricate math of momentum isn't. We also wanted to teach PyTorch, so it was important to define a scope for the framework as well.

With that being said, these are the sub-topics I chose for each of the macro-areas covered:

  • Fundamentals

Machine Learning basics, principles of Multi-Layer Perceptron, Learning Rule, Gradient Descent, Hyper-parameters, Representation learning.

Hands-on: MLP Classification.
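As a rough illustration of the learning rule covered in Fundamentals, here is a minimal NumPy sketch of gradient descent on a single linear neuron (NumPy rather than PyTorch, to keep it dependency-free; the toy data, learning rate, and iteration count are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = 2x + 1 with a single linear neuron via gradient descent.
X = rng.uniform(-1, 1, size=(100, 1))
y = 2 * X + 1

w = np.zeros((1, 1))   # weight
b = 0.0                # bias
lr = 0.5               # learning rate (a hyper-parameter)

for _ in range(200):
    y_hat = X @ w + b                          # forward pass
    grad_w = 2 * X.T @ (y_hat - y) / len(X)    # dMSE/dw
    grad_b = 2 * float(np.mean(y_hat - y))     # dMSE/db
    w -= lr * grad_w                           # gradient-descent update rule
    b -= lr * grad_b

print(round(float(w[0, 0]), 2), round(b, 2))   # converges near 2.0 and 1.0
```

A Multi-Layer Perceptron applies the same update rule, only with more layers and a nonlinearity between them, which is where backpropagation comes in.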

  • Computer Vision

Image structured data, Computer Vision tasks, Convolutional Operation, Filters, Auto-Encoder, Latent Space, Fine Tuning.

Hands-on: Implementing an Auto-Encoder.
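To make the encoder/latent-space/decoder idea concrete, here is a sketch of the optimal *linear* autoencoder, which is equivalent to PCA; the synthetic data is invented for the example, and a real hands-on would train nonlinear encoder and decoder networks in PyTorch instead:

```python
import numpy as np

rng = np.random.default_rng(0)

# Data that truly lives in a 2-D latent space, embedded in 5 dimensions.
Z_true = rng.normal(size=(200, 2))
A = rng.normal(size=(2, 5))
X = Z_true @ A

# The optimal linear autoencoder is given by PCA: encode with the top-k
# principal directions, decode with their transpose.
X_c = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(X_c, full_matrices=False)
W = Vt[:2].T                      # 5 -> 2 encoder weights

latent = X_c @ W                  # encoder: compress into the latent space
recon = latent @ W.T              # decoder: map back to input space

mse = float(np.mean((X_c - recon) ** 2))
print(mse)                        # ~0: two latent dims suffice for rank-2 data
```

The reconstruction error is essentially zero here because the data genuinely has two degrees of freedom; with real images, the latent space discards detail and the error measures what the compression lost.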

  • Geometric Deep Learning

Graph Basics, Message Passing Algorithm, Graph Convolutional Layer, Self-Attention, Node & Graph classification.

Hands-on: Document classification.
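The message-passing idea can be sketched in a few lines. Below is one graph-convolution step in the Kipf & Welling form, written in plain NumPy on a made-up 4-node graph (a real hands-on would use PyTorch Geometric, and the feature and weight values here are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny undirected graph: 4 nodes, edges 0-1, 1-2, 2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

H = rng.normal(size=(4, 3))       # node features
W = rng.normal(size=(3, 2))       # learnable layer weights

# One graph-convolution step: H' = D^-1/2 (A + I) D^-1/2 H W
A_hat = A + np.eye(4)                               # add self-loops
D_inv_sqrt = np.diag(1 / np.sqrt(A_hat.sum(axis=1)))
H_next = D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W

print(H_next.shape)               # (4, 2): each node now mixes its neighbours
```

Stacking such layers lets information propagate further along the graph, which is what node and graph classification ultimately exploit.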

  • Natural Language Processing (NLP)

Background, Tokens, TF-IDF, Embeddings, Word2Vec, CBOW, Skip Gram, BERT.

Hands-on: Sentiment Analysis.
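Of the NLP topics above, TF-IDF is the easiest to demystify in code. Here is a from-scratch sketch on three invented toy documents (library implementations such as scikit-learn's add smoothing and normalization on top of this basic formula):

```python
import numpy as np

docs = [
    "the movie was great great fun",
    "the movie was terrible",
    "great acting and great story",
]
corpus = [d.split() for d in docs]
vocab = sorted({w for d in corpus for w in d})

# Term frequency: count of term t in doc d, divided by doc length.
tf = np.array([[d.count(w) / len(d) for w in vocab] for d in corpus])

# Inverse document frequency: log(N / number of docs containing t).
n_docs = len(corpus)
df = np.array([sum(w in d for d in corpus) for w in vocab])
idf = np.log(n_docs / df)

tfidf = tf * idf
top = vocab[int(np.argmax(tfidf[0]))]
print(top)   # "fun": it appears in doc 0 but in no other document
```

The highest-scoring word for the first document is the one that is frequent there yet absent elsewhere, which is exactly the intuition TF-IDF formalizes before denser representations like Word2Vec take over.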

  • Large Language Models (LLM)

Tokens, Embeddings, The curse of dimensionality, Cosine Similarity, RNN and LSTM, Self-Attention, Auto-encoder, Sentence transformers, LLMs Architecture, training LLMs.

Hands-on: Vector Databases.
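The cosine-similarity retrieval at the heart of a vector database can be sketched without any special library. Below is a toy NumPy version with random placeholder embeddings (a production system would store real sentence-transformer embeddings and use an approximate index such as FAISS):

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "vector database": each stored document is an embedding vector.
db = rng.normal(size=(1000, 64))
db /= np.linalg.norm(db, axis=1, keepdims=True)   # L2-normalise once

def search(query, k=3):
    """Return indices of the k most cosine-similar stored vectors."""
    q = query / np.linalg.norm(query)
    sims = db @ q                  # dot product = cosine after normalisation
    return np.argsort(sims)[::-1][:k]

# A slightly perturbed copy of entry 42 should retrieve entry 42 first.
query = db[42] + 0.05 * rng.normal(size=64)
print(search(query))               # first index returned is 42
```

Brute-force search like this is fine at small scale; the curse of dimensionality and collection size are what push real systems toward approximate nearest-neighbour indexes.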

Make the topics talk among themselves

Topics in Deep Learning talk to each other frequently; intersections between subjects show up all the time. Exploring this interaction is very important from a didactic point of view, since an idea then recurs across classes. It is also important for showing each methodology from a macro view rather than only the micro view of a single subject. Good ideas usually come from importing concepts from different areas into your problem, and a great Data Scientist must know how to abstract a methodology away from its obvious applications. For example, the Transformer architecture is commonly attributed only to NLP, since that is where its most remarkable contribution lies; however, transformers are also very useful for Computer Vision and for probabilistic mapping of features in autoencoders. Another good example is recurrence: usually used in the context of time-series forecasting, but useful for nearly all Deep Learning topics.

Filter the right person for each topic

Once the topics and the intersections between them are established, we can start selecting the right person for each topic. The natural choice is people with a wide portfolio or postgraduate experience in the topic. This is extremely important, since experience creates a depth of understanding of the subject that leads to the kind of results I was looking for in a class.

For this specific course:

  • Fundamentals — Wed Braz (currently Manager at Artefact).
  • Computer Vision — Roberta Duarte (currently PhD student at the University of São Paulo).
  • Geometric Deep Learning — Rodrigo da Motta C. Carvalho (myself; currently a Data Scientist and MS student in Computational Neuroscience).
  • Natural Language Processing — Mariana Mercucci (currently Senior Data Scientist at Serasa Experian).
  • Large Language Models — Pedro Vaz Pimenta (currently Senior Data Scientist at Willow Tree).

Balance theory and hands-on

Another important step is to balance the theory that introduces concepts with practical experience. Theoretical concepts are essential to understanding and using deep learning models; however, as already discussed, experience brings a new depth of understanding of those concepts. This is also central to solidifying knowledge transfer during the course: our brain benefits from practical experience (e.g., exercises or projects) to create the connections needed to really learn hard concepts. The idea of the course was to build an application for each topic using an open-source dataset, with a task that is illustrative and is explored during the theoretical sections. This was largely inspired by Kaggle challenges and the practical examples in framework documentation.

The Real deal

This is the most complicated part so far. I've talked a lot about combining theory and practice, but up to this point it is all theory about the steps and how to accomplish them. Practice will always differ from theory, so we need to be ready to make adjustments during the course. For example, the concepts of autoencoders and latent space were explored in the Computer Vision class. These concepts are really hard to digest, and the students were quite confused at the end of the class. Because of that, I adjusted the macro-topic of the Geometric Deep Learning class to revisit autoencoders and latent spaces, since these concepts are key.

What I learned

After the course was over, I was very happy with the results and really motivated by the students. I think instructors learn far more from the students than from studying to prepare the course. The huge variety of backgrounds was enlightening: Deep Learning is a topic that crosses nearly all fields of knowledge at this moment. The student interactions were therefore very enriching, both for their multidisciplinary aspect and for the diversity of ways of seeing the same concepts. Moreover, the students were proactive and interested in the classes, and they also discussed and connected with each other.

In the end, some students gave me their feedback: "This one-week course filled the gap that most DL courses fail to fill: good mathematical depth, balance between theory and practice, intersections between subjects, the possibility to connect with other students and, my personal favourite, instructors who share real experiences."


Organizing a one-week university course on Deep Learning at USP made evident the real challenge of pedagogy in this field: the variety of student backgrounds and the recency and fast-changing nature of Deep Learning topics demand adaptability and responsiveness to student needs and comprehension levels. This article is a record of working through those difficulties: understanding the motivation, knowing the students' backgrounds, defining a scope, selecting topics, exploiting subject intersections, and balancing theory and practice in Data Science and Artificial Intelligence education.

Course material

The course classes are in Portuguese; however, the subtitle translation works well. Furthermore, there is a Google Drive with the slides and hands-on exercises.


I would like to thank the Institute of Physics of the University of São Paulo for the infrastructure, the instructors for kindly offering their time and effort, and HackerSpace IFUSP, Victor Tsutsumiuchi and colleagues for the support and assistance.


Published via Towards AI
