Seminar Deep Learning with PyTorch/Tensorflow
Two-day online seminar: €1,190.00 per person (net)
Introduction to deep learning algorithms, especially in the field of image processing, with a focus on supervised and semi-supervised learning!
LEARNING OBJECTIVES AND AGENDA
Goals:
Basics of PyTorch (and/or TensorFlow)
Hidden layers and the concept of tensors (see the short sketch after this list)
Network topologies: structure, components, and activation functions
Other important layers in the network: Conv2D, max pooling, etc.
Model training and evaluation
Monitoring training with callbacks
Detecting and dealing with overfitting; working with separate training, validation, and test datasets
Classification of images
Use of pre-trained networks (fine-tuning, transfer learning)
Notes on using PyTorch for fine-tuning LLMs
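To give a first impression of the topics above, here is a minimal sketch of working with tensors and a single layer in PyTorch. It is an illustration only, not part of the official course material, and an equivalent can be written with TensorFlow:

```python
import torch

# A tensor is an n-dimensional array; here a batch of 4 grayscale "images" of 28x28 pixels
x = torch.rand(4, 1, 28, 28)             # shape: (batch, channels, height, width)

# A fully connected layer: its weights and bias are themselves tensors
layer = torch.nn.Linear(in_features=28 * 28, out_features=10)

# Flatten each image to a vector, apply the layer, then a softmax activation
logits = layer(x.view(4, -1))            # shape: (4, 10)
probs = torch.softmax(logits, dim=1)     # class probabilities per sample

print(probs.shape)                       # torch.Size([4, 10])
print(probs.sum(dim=1))                  # each row sums to 1
```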
Day 1
Basics
Brief introduction to machine learning and artificial intelligence (AI)
Relationship between AI, Deep Learning and Machine Learning
Examples of deep learning algorithms in today's products
Data
Overfitting
Dividing the data into training, validation and test sets
One-hot encoding
Data normalization
Application to the MNIST dataset (see the data-preparation sketch below)
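The data-handling steps listed above can be sketched in a few lines of Keras code. This is a simplified illustration, not the exercise code used in the seminar:

```python
import tensorflow as tf

# Load MNIST: 60,000 training and 10,000 test images of handwritten digits (28x28, grayscale)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Normalize pixel values from [0, 255] to [0, 1]
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0

# One-hot encode the labels: the digit 3 becomes [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
y_train = tf.keras.utils.to_categorical(y_train, num_classes=10)
y_test = tf.keras.utils.to_categorical(y_test, num_classes=10)

# Hold out part of the training data as a validation set; the test set stays untouched
x_val, y_val = x_train[-10000:], y_train[-10000:]
x_train, y_train = x_train[:-10000], y_train[:-10000]
```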
MLP (Multi-Layer Perceptron)
Basic components of an MLP: Perceptron, weights, bias
Nonlinearities (activation functions)
Using Softmax for classification tasks (see the MLP sketch below)
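A minimal MLP for the MNIST data prepared above might look as follows in Keras (a sketch for illustration; the hidden-layer size is an example value):

```python
import tensorflow as tf

# A small multi-layer perceptron for 28x28 grayscale images and 10 classes
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),      # image -> vector of 784 values
    tf.keras.layers.Dense(128, activation="relu"),      # hidden layer: weights, bias, nonlinearity
    tf.keras.layers.Dense(10, activation="softmax"),    # softmax turns scores into class probabilities
])
model.summary()
```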
Training and application of a network
Various loss functions
Backpropagation: Training the weights
Initialization of the weights
Epoch and batch size
Interpretation of output during training
Using the trained network to predict new data (see the training sketch below)
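Training and prediction can be sketched as follows, continuing the MLP and data from the sketches above (epochs and batch size are example values):

```python
import numpy as np

# Cross-entropy loss for one-hot labels; the weights are updated via backpropagation
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# One epoch is a full pass over the training data; batch_size controls how many
# samples are processed before each weight update
history = model.fit(x_train, y_train,
                    epochs=5,
                    batch_size=64,
                    validation_data=(x_val, y_val))

# Apply the trained network to new data
predictions = model.predict(x_test[:5])    # shape (5, 10): one probability per class
print(np.argmax(predictions, axis=1))      # predicted digit for each sample
```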
Day 2
Convolutional Neural Networks (CNNs)
Introduction to Convolutional Neural Networks (CNN)
Explanation of the convolution layer
Importance of filters
Using padding and stride in convolution
Number of channels and filters in the convolutional layer
Using max-pooling layers (see the CNN sketch below)
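The building blocks above can be combined into a small CNN, for example (an illustrative sketch; layer sizes are example values):

```python
import tensorflow as tf

# padding="same" keeps the spatial size; strides controls the step width of the filter
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(filters=32, kernel_size=(3, 3), padding="same",
                           activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),      # downsample feature maps by a factor of 2
    tf.keras.layers.Conv2D(filters=64, kernel_size=(3, 3), padding="same",
                           strides=(1, 1), activation="relu"),
    tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.summary()   # shows how the number of filters grows while the spatial resolution shrinks
```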
Intervening in the .fit() function (callbacks)
Implementing a callback in Keras
Storage of model weights and architecture
Use of Early Stopping
Adjusting the learning rate with Learning Rate Scheduler
Visualization of the training process with MLflow (see the callback sketch below)
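A sketch of typical callbacks, assuming a model and data prepared as in the earlier sketches; MLflow tracking (e.g. via mlflow.autolog()) can be layered on top of such a training loop but is not shown here:

```python
import tensorflow as tf

# A minimal custom callback: report the validation accuracy after every epoch
class PrintValAccuracy(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        print(f"epoch {epoch}: val_accuracy = {logs['val_accuracy']:.3f}")

callbacks = [
    PrintValAccuracy(),
    # Save the best model (weights and architecture) seen so far
    tf.keras.callbacks.ModelCheckpoint("best_model.keras", save_best_only=True),
    # Stop training once the validation loss stops improving
    tf.keras.callbacks.EarlyStopping(patience=3, restore_best_weights=True),
    # Halve the learning rate every 5 epochs
    tf.keras.callbacks.LearningRateScheduler(
        lambda epoch, lr: lr * 0.5 if epoch > 0 and epoch % 5 == 0 else lr),
]

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=20, batch_size=64,
          validation_data=(x_val, y_val), callbacks=callbacks)
```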
Classification of images:
Using the Softmax layer for classification tasks
Cross-Entropy Loss Function
Use of regularization techniques such as L2 regularization, dropout and batch normalization
Loading a trained model (see the sketch below)
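Regularization and model loading might look like this in Keras (a sketch; the regularization strengths and layer sizes are example values):

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

model = tf.keras.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(256, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),   # L2 penalty on the weights
    layers.BatchNormalization(),                              # normalize activations per batch
    layers.Dropout(0.5),                                      # randomly drop units during training
    layers.Dense(10, activation="softmax"),                   # softmax output for 10 classes
])

# The cross-entropy loss matches the softmax output for one-hot labels
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Later: reload a previously trained and saved model
restored = tf.keras.models.load_model("best_model.keras")
```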
Day 3 (optional third day)
Data management with tf.data:
A typical data workflow with tf.data
Handling large data sets
Accelerating data reading (see the pipeline sketch below)
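A typical tf.data pipeline can be sketched as follows (assuming raw image arrays x_train and one-hot labels y_train, and a compiled model as in the earlier sketches):

```python
import tensorflow as tf

# An input pipeline that shuffles, preprocesses and batches data in parallel with training
dataset = (
    tf.data.Dataset.from_tensor_slices((x_train, y_train))
    .shuffle(buffer_size=10_000)                         # reshuffle the samples
    .map(lambda x, y: (tf.cast(x, tf.float32) / 255.0, y),
         num_parallel_calls=tf.data.AUTOTUNE)            # preprocess on multiple CPU threads
    .batch(64)
    .prefetch(tf.data.AUTOTUNE)                          # overlap data reading with GPU training
)

model.fit(dataset, epochs=5)
```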
Semi-supervised learning (SSL):
Overview of Semi-Supervised Learning
Using the SimCLR model for semi-supervised learning
Creating a custom tf.keras model
Using a contrastive loss (see the sketch below)
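As an impression of the contrastive objective, the following is a simplified sketch of the NT-Xent loss used by SimCLR; it omits many details of the full method (data augmentation, projection head, training loop):

```python
import tensorflow as tf

def nt_xent_loss(z1, z2, temperature=0.5):
    """Simplified NT-Xent (contrastive) loss.

    z1, z2: embeddings of two augmented views of the same batch, shape (N, dim).
    Positive pairs are (z1[i], z2[i]); all other samples in the batch act as negatives.
    """
    z1 = tf.math.l2_normalize(z1, axis=1)
    z2 = tf.math.l2_normalize(z2, axis=1)
    z = tf.concat([z1, z2], axis=0)                        # shape (2N, dim)

    # Cosine similarity between all pairs, scaled by the temperature
    sim = tf.matmul(z, z, transpose_b=True) / temperature  # shape (2N, 2N)

    # Mask out self-similarities on the diagonal
    n = tf.shape(z1)[0]
    sim = sim - tf.eye(2 * n) * 1e9

    # For sample i, the positive example is its counterpart in the other view
    labels = tf.concat([tf.range(n, 2 * n), tf.range(0, n)], axis=0)
    loss = tf.keras.losses.sparse_categorical_crossentropy(labels, sim, from_logits=True)
    return tf.reduce_mean(loss)
```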
Best practices:
Approach to a new deep learning task
Hyperparameter optimization (see the sketch below)
Model optimization after training
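Hyperparameter optimization can be as simple as a grid search loop. The sketch below uses a hypothetical build_model() helper and example value ranges; in practice, tools such as KerasTuner or Optuna automate this:

```python
import tensorflow as tf

results = {}
for learning_rate in [1e-2, 1e-3, 1e-4]:
    for batch_size in [32, 64]:
        model = build_model()   # hypothetical helper returning a fresh, uncompiled model
        model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate),
                      loss="categorical_crossentropy", metrics=["accuracy"])
        history = model.fit(x_train, y_train, epochs=5, batch_size=batch_size,
                            validation_data=(x_val, y_val), verbose=0)
        results[(learning_rate, batch_size)] = max(history.history["val_accuracy"])

best = max(results, key=results.get)
print("best (learning_rate, batch_size):", best, "val_accuracy:", results[best])
```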
Fine-tuning and pre-trained networks:
Introduction of other well-known network architectures such as Inception-V3 and ResNet
Finding (already trained) code for networks
Use of pre-trained networks and fine-tuning for your own tasks (transfer learning; see the sketch below)
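A common transfer-learning pattern with a pre-trained network can be sketched as follows (the number of target classes and the input size are example values):

```python
import tensorflow as tf

# Load ResNet50 pre-trained on ImageNet, without its original classification head
base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      input_shape=(224, 224, 3))
base.trainable = False            # freeze the pre-trained weights (pure feature extractor)

# Add a small head for the new task (here: 5 classes, chosen only for illustration)
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# For fine-tuning, the top layers of the base network can later be unfrozen
# and trained further with a small learning rate
```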
CONTENTS
The two- or three-day seminar "Deep Learning / AI with Tensorflow - Keras" offers an introduction to deep learning algorithms, particularly in the field of image processing, with a focus on supervised and semi-supervised learning. Deep learning algorithms are among the most important methods in machine learning and are already widely integrated into our everyday lives. During the seminar, participants learn the training processes needed to build suitable models for classifying and evaluating new data, and they work step by step through the fundamentals of programming deep learning algorithms in TensorFlow/Keras. The course covers preparing and sequentially reading large datasets during training, building deep neural networks, configuring training in various ways, and applying trained models to new data.
The seminar includes a detailed discussion of common variants of deep neural networks and their components. The content is conveyed using presentation slides and flip charts and reinforced through practical exercises.
The algorithms discussed are widely used in applications such as:
Symbol recognition (e.g. numbers and letters)
Monitoring of production processes for error detection and wear analysis of components
Analysis of textures and surfaces
Automatic tagging of images to support text-based image search
The most common types of artificial neural networks are covered in theory and their components discussed. These include the multi-layer perceptron (MLP) for general-purpose tasks, the convolutional neural network (CNN) for image processing, and SimCLR for semi-supervised learning. In practical exercises with Python and the Keras/TensorFlow framework, these networks are implemented on powerful GPUs.
Python is the most widely used language in deep learning, and Keras/Tensorflow are among the most popular libraries for easily implementing deep learning algorithms.
The seminar will cover use cases of supervised learning, particularly image classification, as well as semi-supervised training and transfer learning with limited data. Participants will learn about the performance of the algorithms and gain approaches to solving typical training problems, such as regularization.
Participants will design simple neural networks with different layers and implement them in Python using Keras/TensorFlow in the cloud with Jupyter Notebooks. The training provides the fundamentals so that, upon completion, participants will be able to program deep learning algorithms, explore further use cases independently, and apply what they have learned to their own problems.



