Practical ML with Neural Networks Training Overview
This comprehensive Machine Learning (ML) with Neural Networks training course teaches attendees the theory, concepts, and terminology of ML with neural networks. The course goes beyond the basics to explore the practical side of ML using TensorFlow, Keras, popular Natural Language Processing (NLP) toolkits, and more, allowing participants to master their new skills through hands-on practice in a real-world setting.
Location and Pricing
Accelebrate offers instructor-led enterprise training for groups of 3 or more online or at your site. Most Accelebrate classes can be flexibly scheduled for your group, including delivery in half-day segments across a week or set of weeks. To receive a customized proposal and price quote for private corporate training on-site or online, please contact us.
In addition, some courses are available as live, instructor-led training from one of our partners.
Objectives
- Define machine learning and neural networks
- Identify the different types of neural networks
- Understand the terminology and concepts of machine learning
- Use TensorFlow and Keras to build neural networks
- Apply neural networks to natural language processing tasks
- Use neural networks to solve real-world problems
Prerequisites
All students must have Python experience and a basic understanding of linear algebra and calculus.
Outline
Machine Learning with Neural Networks
- Arthur C. Clarke’s 3rd Law
- What is Machine Learning?
- Terminology: Features and Targets
- Terminology: Observations (Examples)
- Supervised and Unsupervised ML
- “Classical” ML and ML with Neural Networks
- The Shared Concepts and Principles
- AI and Data Science
- What is a Neural Network?
- Network vs Model
- Positional Types of Layers
- Deep Learning
- How Does My Network Know Which Problem I Want It to Solve?
- The Desired Model Properties
- The Artificial Neuron
- Perceptron
- The Perceptron Symbol
- A Breakthrough in Neural Network Design
- Perceptrons and MLPs
- A Basic Neural Network Example
- Popular Activation Functions
- Navigating Neural Network Layers
- A Sample Neural Network Diagram
- Model Training
- Measuring the Error with the Loss (Cost) Function
- Loss Function Properties
- Mini-batches and Epochs
- Neural Network Training Steps
- The Chain Rule in Calculus
- The Chain Rule in Neural Networks
- The Gradient Descent Formulation
- Applying Efficiencies with Autodiff ...
- Types of Neural Networks
- Convolutional Neural Networks (CNNs)
- Recurrent Neural Networks (RNNs)
- RNN Common and Unrolled Visual Representation
- Autoencoders
- Neural Network Libraries and Frameworks
- Ethical AI
- Then a Miracle Occurs ...
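As a preview of the model-training topics above (the loss function, the chain rule, and gradient descent), here is a minimal NumPy sketch that trains a single artificial neuron on a toy regression problem. The data, learning rate, and variable names are illustrative only, not part of any library.

```python
import numpy as np

# Toy data: the target follows y = 2*x1 + 3*x2, so training should
# approximately recover w ≈ [2, 3] and b ≈ 0.
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [4.0, 5.0]])
y = np.array([8.0, 7.0, 15.0, 23.0])

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # the neuron's weights
b = 0.0                  # the neuron's bias
lr = 0.05                # learning rate (a hyperparameter)

for epoch in range(500):
    y_pred = X @ w + b                    # forward pass (linear activation)
    error = y_pred - y
    loss = np.mean(error ** 2)            # mean squared error (MSE) loss
    # Gradients of the loss w.r.t. w and b -- the chain rule in action
    grad_w = 2 * X.T @ error / len(y)
    grad_b = 2 * error.mean()
    # Gradient descent update
    w -= lr * grad_w
    b -= lr * grad_b

print("weights:", w, "bias:", b, "final loss:", loss)
```

Real networks stack many such neurons into layers and let a framework's autodiff compute the gradients, but the training loop keeps the same shape.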
Machine Learning Concepts and Terminology
- Supervised and Unsupervised ML
- Self-Supervised Learning
- Terminology: Features and Targets
- Terminology: Observations (Examples)
- Notation for Observations
- Data Structures: Tensors
- Continuous and Categorical Features
- Continuous Features
- Categorical Features
- Feature Types Visually
- Common Distance Metrics
- The Euclidean Distance
- What is a Model?
- Model Parameters and Hyperparameters
- Model Accuracy
- Loss Functions
- Mean Squared Error (MSE)
- Mean Absolute Error (MAE)
- (Categorical) Cross Entropy Loss
- The Cross-Entropy Loss Visually
- The softmax Function
- Confusion Matrix
- The Binary Classification Confusion Matrix
- Multi-class Classification Confusion Matrix Example
- Feature Engineering
- Data Scaling and Normalization
- Bias-Variance (Underfitting vs. Overfitting) Trade-off
- Bias and Variance Visually
- Underfitting vs. Overfitting Visually
- Ways to Balance the Bias-Variance Trade-off
- Regularization
- Dimensionality Reduction
- Model Validation and Avoiding Test Data Leakage
- Training Error vs. Validation Error Diagram
- Training/Validation/Test Data Split Ratios
- Online Glossaries
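Several of the loss functions listed above can be computed by hand in a few lines of NumPy. The sketch below evaluates MSE, MAE, and categorical cross-entropy (via softmax) on made-up values, purely to make the formulas concrete.

```python
import numpy as np

# Regression losses on a small set of observations
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5,  0.0, 2.0, 8.0])
mse = np.mean((y_true - y_pred) ** 2)        # Mean Squared Error
mae = np.mean(np.abs(y_true - y_pred))       # Mean Absolute Error

# Classification: softmax turns raw scores (logits) into probabilities
def softmax(z):
    z = z - z.max()                          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])           # scores for 3 classes
probs = softmax(logits)

# Categorical cross-entropy against a one-hot target (true class = 0)
target = np.array([1.0, 0.0, 0.0])
cross_entropy = -np.sum(target * np.log(probs))

print(f"MSE={mse:.3f}  MAE={mae:.3f}  cross-entropy={cross_entropy:.3f}")
```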
TensorFlow Introduction
- What is TensorFlow?
- The TensorFlow Logo
- Tensors and Python API
- Python TensorFlow Interfaces Diagram
- GPUs and TPUs
- Google Colab
- Data Tools
- TensorFlow Variants
- TensorFlow Core API
- TensorFlow Lite
- TFX (TensorFlow Extended)
- A TFX Pipeline Example
- XLA Optimization
- TensorFlow Toolkit Stack
- Keras
- TensorBoard
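As a small taste of the TensorFlow Core API, the sketch below (which assumes a TensorFlow 2.x installation) creates tensors, runs an operation eagerly, and uses tf.GradientTape for automatic differentiation.

```python
import tensorflow as tf

# Tensors are n-dimensional arrays; TensorFlow places them on CPU, GPU, or TPU as available.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.ones((2, 2))
print(tf.matmul(a, b))          # eager execution: the result is computed immediately

# Automatic differentiation (autodiff) with GradientTape
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2 + 2.0 * x        # y = x^2 + 2x
grad = tape.gradient(y, x)      # dy/dx = 2x + 2, which is 8.0 at x = 3
print(grad)
```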
Introduction to Keras
- What is Keras?
- Core Keras Data Structures
- Layers in Keras
- The Dense Layer
- Defining the Layer Activation Function
- Models in Keras
- Components of a Keras Model
- Creating Neural Networks in Keras
- The Sequential Model
- A Sequential Model Code Example
- The Strengths and Weaknesses of Sequential Models
- The Functional API
- A Functional API Example
- The Strengths and Weaknesses of the Functional API
- Making New Layers and Models via Subclassing
- A Layer Subclassing Example
- A Model Subclassing Example
- The Strengths and Weaknesses of Subclassing
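To illustrate the Sequential and Functional API styles listed above, here is a sketch of the same small classifier built both ways. The layer sizes and hyperparameters are arbitrary choices for illustration, not recommendations.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Sequential model: a plain stack of layers
seq_model = keras.Sequential([
    keras.Input(shape=(20,)),                # 20 input features
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(3, activation="softmax"),   # 3-class output
])

# The same network expressed with the Functional API
inputs = keras.Input(shape=(20,))
x = layers.Dense(64, activation="relu")(inputs)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(3, activation="softmax")(x)
func_model = keras.Model(inputs=inputs, outputs=outputs)

func_model.compile(optimizer="adam",
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
func_model.summary()
```

The Functional API pays off once a model needs multiple inputs or outputs, shared layers, or a non-linear topology, which a plain Sequential stack cannot express.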
Neural Networks Best Practices
- "There ain't no such thing as a free lunch ..."
- The Number of Hidden Layers
- Number of Neurons in a Hidden Layer
- Optimizers
- The Batch Size
- Activation Functions
- Batch Normalization
- Regularization
- Dropout
- An Example of Using Utility Layers in Keras
- Early Stopping
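The utility layers and callbacks covered in this module come together in a few lines of Keras. The sketch below shows one illustrative way to combine BatchNormalization, Dropout, and EarlyStopping; the X_train and y_train arrays are hypothetical placeholders for your own data.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(20,)),
    layers.Dense(128, activation="relu"),
    layers.BatchNormalization(),       # normalize activations between layers
    layers.Dropout(0.3),               # randomly drop 30% of units as regularization
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Early stopping: halt training once the validation loss stops improving
early_stop = keras.callbacks.EarlyStopping(monitor="val_loss",
                                           patience=5,
                                           restore_best_weights=True)

# Hypothetical X_train / y_train arrays stand in for your own data:
# model.fit(X_train, y_train, validation_split=0.2, epochs=100,
#           batch_size=32, callbacks=[early_stop])
```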
Understanding NLU, NLP, and Text Mining
- The Age of Digital Assistants ...
- Defining NLU
- What is Natural Language Processing (NLP)?
- Common NLP Tasks
- What is Text Mining?
- Supervised vs Unsupervised Types of NLP
- AI, DL, ML, and NLP Diagram
- Machine Learning in NLP
- Popular ML Algorithms and Statistical Models
- NLP on Neural Networks
- A Sentiment Analysis Example
- A More Advanced Example
- Popular Text Mining and NLP Libraries and Technologies
- Large Language Models (LLMs)
- Transformers
- Self-Supervised Learning
- Transformer Variations
- An AI Translator Component Diagram
- Going to the Cloud ...
- Google Natural Language AI Cloud Service
- How Google NL Service Works
- Google Translate
- AWS Comprehend
- How Comprehend Works
- Comprehend in the AWS Management Console
- Comprehend Use Cases
- AWS Lex
- AWS Polly
- Polly's Text-to-Speech Dashboard
- Example of Using Polly via the AWS CLI
- AWS Transcribe
- Azure NL Services
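A sentiment analysis model of the kind referenced in this module can be sketched in Keras using the bundled IMDB movie-review dataset. The architecture and hyperparameters below are illustrative choices only, and the example assumes TensorFlow 2.9 or later (for keras.utils.pad_sequences).

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Load the IMDB movie-review dataset, keeping the 10,000 most frequent words
(x_train, y_train), (x_test, y_test) = keras.datasets.imdb.load_data(num_words=10000)

# Pad/truncate every review to a fixed length so each batch forms a rectangular tensor
maxlen = 200
x_train = keras.utils.pad_sequences(x_train, maxlen=maxlen)
x_test = keras.utils.pad_sequences(x_test, maxlen=maxlen)

# An Embedding layer learns a dense vector per word; the rest is a small classifier
model = keras.Sequential([
    layers.Embedding(input_dim=10000, output_dim=16),
    layers.GlobalAveragePooling1D(),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # positive vs. negative sentiment
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=512,
          validation_data=(x_test, y_test))
```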
NLP Concepts and Terminology
- How Do Machines Understand Text?
- Getting the Text Data
- Text Formats
- Common Text Pre-Processing Activities
- Text Normalization
- The Stop Words
- Stemming
- Lemmatization
- Part-of-Speech (POS) Tagging
- Text Corpus Vocabulary
- Documents as Vectors
- OOV Tokens
- The Bag of Words (BoW)
- N-Grams
- TF-IDF
- The Feature Hashing Trick
- Cosine Similarity and Distance
- Limitations of BoW and TF-IDF Representation Schemes
- Word Embedding
- Creating Word Embeddings
- The Word2vec Model
- Gensim in Action (Bring in Your Own Protractor)
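To make the vocabulary, bag-of-words, cosine similarity, and word-embedding ideas above concrete, here is a toy sketch in NumPy plus Gensim. The three-document corpus is made up, and the Word2Vec settings assume Gensim 4.x.

```python
import numpy as np
from gensim.models import Word2Vec   # assumes Gensim 4.x

docs = ["the cat sat on the mat",
        "the dog sat on the log",
        "cats and dogs are pets"]
tokenized = [doc.split() for doc in docs]          # naive whitespace tokenization

# Build the corpus vocabulary and represent each document as a bag-of-words vector
vocab = sorted({word for doc in tokenized for word in doc})
index = {word: i for i, word in enumerate(vocab)}

def bow_vector(tokens):
    vec = np.zeros(len(vocab))
    for word in tokens:
        vec[index[word]] += 1                      # raw term counts
    return vec

vectors = np.array([bow_vector(doc) for doc in tokenized])

def cosine_similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print("doc0 vs doc1:", cosine_similarity(vectors[0], vectors[1]))
print("doc0 vs doc2:", cosine_similarity(vectors[0], vectors[2]))  # 0: no shared tokens

# Dense word embeddings with Gensim's Word2Vec (toy corpus, illustrative settings)
w2v = Word2Vec(sentences=tokenized, vector_size=50, window=2, min_count=1, epochs=50)
print(w2v.wv.most_similar("cat", topn=3))
```

Note that under bag-of-words the first and third documents share no tokens ("cat" vs. "cats"), so their similarity is zero; that is exactly the limitation that stemming, lemmatization, and word embeddings address.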
Conclusion
Training Materials
All ML with Neural Networks training students receive courseware covering the topics in the class.
Software Requirements
- Windows, Mac, or Linux
- A current version of Anaconda for Python 3.x
- Related lab files that Accelebrate will provide