
Title:
Deep learning approaches for security threats in IoT environments
Author:
Abdel-Basset, Mohamed, 1985- author.
ISBN:
9781119884163
9781119884156
9781119884170
Physical Description:
1 online resource : illustrations (chiefly color)
Contents:
Author Biography -- About the Companion Website -- 1. Chapter 1: Introducing Deep Learning for IoT Security -- 1.1. Introduction -- 1.2. Internet of Things (IoT) Architectures -- 1.2.1. Physical Layer -- 1.2.2. Network Layer -- 1.2.3. Application Layer -- 1.3. Internet of Things Vulnerabilities and Attacks -- 1.3.1. Passive Attacks -- 1.3.2. Active Attacks -- 1.4. Artificial Intelligence -- 1.5. Deep Learning -- 1.6. Taxonomy of Deep Learning Models -- 1.6.1. Supervision Criterion -- 1.6.1.1. Supervised Deep Learning -- 1.6.1.2. Unsupervised Deep Learning -- 1.6.1.3. Semi-supervised Deep Learning -- 1.6.1.4. Deep Reinforcement Learning -- 1.6.2. Incrementality Criterion -- 1.6.2.1. Batch Learning -- 1.6.2.2. Online Learning -- 1.6.3. Generalization Criterion -- 1.6.3.1. Model-based Learning -- 1.6.3.2. Instance-based Learning -- 1.7. Supplementary Materials -- 2. Chapter 2: Deep Neural Networks -- 2.1. Introduction -- 2.2. From Biological Neurons to Artificial Neurons -- 2.2.1. Biological Neurons -- 2.2.2. Artificial Neurons -- 2.3. Artificial Neural Network (ANN) -- 2.4. Activation Functions -- 2.4.1. Types of Activation -- 2.4.1.1. Binary Step Function -- 2.4.1.2. Linear Activation Function -- 2.4.1.3. Non-Linear Activation Functions -- 2.5. The Learning Process of ANN -- 2.5.1. Forward Propagation -- 2.5.2. Backpropagation (Gradient Descent) -- 2.6. Loss Functions -- 2.6.1. Regression Loss Functions -- 2.6.1.1. Mean Absolute Error (MAE) Loss -- 2.6.1.2. Mean Squared Error (MSE) Loss -- 2.6.1.3. Huber Loss -- 2.6.1.4. Mean Bias Error (MBE) Loss -- 2.6.1.5. Mean Squared Logarithmic Error (MSLE) -- 2.6.2. Classification Loss Functions -- 2.6.2.1. Binary Cross Entropy (BCE) Loss -- 2.6.2.2. Categorical Cross Entropy (CCE) Loss -- 2.6.2.3. Hinge Loss -- 2.6.2.4. Kullback-Leibler Divergence (KL) Loss -- 2.7. Supplementary Materials -- 3. Chapter 3: Training Deep Neural Networks -- 3.1. Introduction -- 3.2. Gradient Descent Revisited -- 3.2.1. Gradient Descent -- 3.2.2. Stochastic Gradient Descent -- 3.2.3. Mini-batch Gradient Descent -- 3.3. Vanishing and Exploding Gradients -- 3.4. Gradient Clipping -- 3.5. Parameter Initialization -- 3.5.1. Random Initialization -- 3.5.2. LeCun Initialization -- 3.5.3. Xavier Initialization -- 3.5.4. Kaiming (He) Initialization -- 3.6. Faster Optimizers -- 3.6.1. Momentum Optimization -- 3.6.2. Nesterov Accelerated Gradient -- 3.6.3. AdaGrad -- 3.6.4. RMSProp -- 3.6.5. Adam Optimizer -- 3.7. Model Training Issues -- 3.7.1. Bias -- 3.7.2. Variance -- 3.7.3. Overfitting Issues -- 3.7.4. Underfitting Issues -- 3.7.5. Model Capacity -- 3.8. Supplementary Materials -- 4. Chapter 4: Evaluating Deep Neural Networks -- 4.1. Introduction -- 4.2. Validation Dataset -- 4.3. Regularization Methods -- 4.3.1. Early Stopping -- 4.3.2. L1 & L2 Regularization -- 4.3.3. Dropout -- 4.3.4. Max-Norm Regularization -- 4.3.5. Data Augmentation -- 4.4. Cross-Validation -- 4.4.1. Hold-out Cross-Validation -- 4.4.2. K-fold Cross-Validation -- 4.4.3. Repeated K-fold Cross-Validation -- 4.4.4. Leave-one-out Cross-Validation -- 4.4.5. Leave-p-out Cross-Validation -- 4.4.6. Time Series Cross-Validation -- 4.4.7. Block Cross-Validation -- 4.5. Performance Metrics -- 4.5.1. Regression Metrics -- 4.5.1.1. Mean Absolute Error (MAE) -- 4.5.1.2. Root Mean Squared Error (RMSE) -- 4.5.1.3. Coefficient of Determination (R-Squared) -- 4.5.1.4. Adjusted R² -- 4.5.2. Classification Metrics -- 4.5.2.1. Confusion Matrix -- 4.5.2.2. Accuracy -- 4.5.2.3.
Precision -- 4.5.2.4. Recall -- 4.5.2.5. Precision-Recall Curve -- 4.5.2.6. F1-score -- 4.5.2.7. Beta F1-score -- 4.5.2.8. False Positive Rate (FPR) -- 4.5.2.9. Specificity -- 4.5.2.10. Receiver Operating Characteristic (ROC) Curve -- 4.6. Supplementary Materials -- 5. Chapter 5: Convolutional Neural Networks -- 5.1. Introduction -- 5.2. Shift from Fully Connected to Convolutional -- 5.3. Basic Architecture -- 5.3.1. The Cross-Correlation Operation -- 5.3.2. Convolution Operation -- 5.3.3. Receptive Field -- 5.3.4. Padding and Stride -- 5.3.4.1. Padding -- 5.3.4.2. Stride -- 5.4. Multiple Channels -- 5.4.1. Multi-channel Inputs -- 5.4.2. Multi-channel Outputs -- 5.4.3. 1×1 Convolutional Kernel -- 5.5. Pooling Layers -- 5.5.1. Max Pooling -- 5.5.2. Average Pooling -- 5.6. Normalization Layers -- 5.6.1. Batch Normalization -- 5.6.2. Layer Normalization -- 5.6.3. Instance Normalization -- 5.6.4. Group Normalization -- 5.6.5. Weight Normalization -- 5.7. Convolutional Neural Networks (LeNet) -- 5.8. Case Studies -- 5.8.1. Handwritten Digit Classification (One-channel Input) -- 5.8.2. Dog vs. Cat Image Classification (Multi-channel Input) -- 5.9. Supplementary Materials -- 6. Chapter 6: Dive into Convolutional Neural Networks -- 6.1. Introduction -- 6.2. One-dimensional Convolutional Network -- 6.2.1. One-dimensional Convolution -- 6.2.2. One-dimensional Pooling -- 6.3. Three-dimensional Convolutional Network -- 6.3.1. Three-dimensional Convolution -- 6.3.2. Three-dimensional Pooling -- 6.4. Transposed Convolution Layer -- 6.5. Atrous/Dilated Convolution -- 6.6. Separable Convolutions -- 6.6.1. Spatially Separable Convolutions -- 6.6.2. Depth-wise Separable (DS) Convolutions -- 6.7. Grouped Convolution -- 6.8. Shuffled Grouped Convolution -- 6.9. Supplementary Materials -- 7. Chapter 7: Advanced Convolutional Neural Network -- 7.1. Introduction -- 7.2. AlexNet -- 7.3. Block-wise Convolutional Network (VGG) -- 7.4. Network-in-Network -- 7.5. Inception Networks -- 7.5.1. GoogLeNet -- 7.5.2. Inception Network V2 (Inception V2) -- 7.5.3. Inception Network V3 (Inception V3) -- 7.6. Residual Convolutional Networks -- 7.7. Dense Convolutional Networks -- 7.8. Temporal Convolutional Network -- 7.8.1. One-dimensional Convolutional Network -- 7.8.2. Causal and Dilated Convolution -- 7.8.3. Residual Blocks -- 7.9. Supplementary Materials -- 8. Chapter 8: Introducing Recurrent Neural Networks -- 8.1. Introduction -- 8.2. Recurrent Neural Networks -- 8.2.1. Recurrent Neurons -- 8.2.2. Memory Cell -- 8.2.3. Recurrent Neural Network -- 8.3. Different Categories of RNNs -- 8.3.1. One-to-one RNN -- 8.3.2. One-to-many RNN -- 8.3.3. Many-to-one RNN -- 8.3.4. Many-to-many RNN -- 8.4. Backpropagation Through Time -- 8.5. Challenges Facing Simple RNNs -- 8.5.1. Vanishing Gradient -- 8.5.2. Exploding Gradient -- 8.5.2.1. Truncated Backpropagation Through Time (TBPTT) -- 8.5.3. Clipping Gradients -- 8.6. Case Study: Malware Detection -- 8.7. Supplementary Materials -- 9. Chapter 9: Dive into Recurrent Neural Networks -- 9.1. Introduction -- 9.2. Long Short-Term Memory (LSTM) -- 9.2.1. LSTM Gates -- 9.2.2. Candidate Memory Cells -- 9.2.3. Memory Cell -- 9.2.4. Hidden State -- 9.3. LSTM with Peephole Connections -- 9.4. Gated Recurrent Units (GRU) -- 9.4.1. GRU Cell Gates -- 9.4.2. Candidate State -- 9.4.3. Hidden State -- 9.5. ConvLSTM -- 9.6. Unidirectional vs. Bidirectional Recurrent Network -- 9.7. Deep Recurrent Network -- 9.8. Insights -- 9.9. Case Study of Malware Detection -- 9.10. Supplementary Materials -- 10.
Chapter 10: Attention Neural Networks -- 10.1. Introduction -- 10.2. From Biological to Computerized Attention -- 10.2.1. Biological Attention -- 10.2.2. Queries, Keys, and Values -- 10.3. Attention Pooling: Nadaraya-Watson Kernel Regression -- 10.4. Attention Scoring Functions -- 10.4.1. Masked Softmax Operation -- 10.4.2. Additive Attention (AA) -- 10.4.3. Scaled Dot-Product Attention -- 10.5. Multi-Head Attention (MHA) -- 10.6. Self-Attention Mechanism -- 10.6.1. Self-Attention (SA) Mechanism -- 10.6.2. Positional Encoding -- 10.7. Transformer Network -- 10.8. Supplementary Materials -- 11. Chapter 11: Autoencoder Networks -- 11.1. Introduction -- 11.2. Introducing Autoencoders -- 11.2.1. Definition of Autoencoder -- 11.2.2. Structural Design -- 11.3. Convolutional Autoencoder -- 11.4. Denoising Autoencoder -- 11.5. Sparse Autoencoders -- 11.6. Contractive Autoencoders -- 11.7. Variational Autoencoders -- 11.8. Case Study -- 11.9. Supplementary Materials -- 12. Chapter 12: Generative Adversarial Networks (GANs) -- 12.1. Introduction -- 12.2. Foundation of Generative Adversarial Network -- 12.3. Deep Convolutional GAN -- 12.4. Conditional GAN -- 12.5. Supplementary Materials -- 13. Chapter 13: Dive into Generative Adversarial Networks -- 13.1. Introduction -- 13.2. Wasserstein GAN -- 13.2.1. Distance Functions -- 13.2.2. Distance Function in GANs -- 13.2.3.
Wasserstein Loss -- 13.3. Least-Squares GAN (LSGAN) -- 13.4. Auxiliary Classifier GAN (ACGAN) -- 13.5. Supplementary Materials -- 14. Chapter 14: Disentangled Representation GANs -- 14.1. Introduction -- 14.2. Disentangled Representations -- 14.3. InfoGAN -- 14.4. StackedGAN -- 14.5. Supplementary Materials -- 15. Chapter 15: Introducing Federated Learning for Internet of Things (IoT) -- 15.1. Introduction -- 15.2. Federated Learning in Internet of Things -- 15.3. Taxonomic View of Federated Learning -- 15.3.1. Network Structure -- 15.3.1.1. Centralized Federated Learning -- 15.3.1.2. Decentralized Federated Learning -- 15.3.1.3. Hierarchical Federated Learning -- 15.3.2. Data Partition -- 15.3.3. Horizontal Federated Learning -- 15.3.4. Vertical Federated Learning -- 15.3.5. Federated Transfer Learning -- 15.4. Open-Source Frameworks -- 15.4.1. TensorFlow Federated -- 15.4.2. FedML -- 15.4.3. LEAF -- 15.4.4. PaddleFL -- 15.4.5. Federated AI Technology Enabler (FATE) -- 15.4.6. OpenFL -- 15.4.7. IBM Federated Learning -- 15.4.8. NVIDIA FLARE -- 15.4.9. Flower -- 15.4.10. Sherpa.ai -- 15.5. Supplementary Materials -- 16. Chapter 16: Privacy-Preserved Federated Learning -- 16.1. Introduction -- 16.2. Statistical Challenges in Federated Learning -- 16.2.1. Non-Independent and Identically Distributed (Non-IID) ...
Abstract:
"Deep Learning Approaches for Security Threats in IoT Environments discusses approaches and measures to ensure our IoT systems are secure. This book discusses important concepts of AI and IoT and applies vital approaches that can be used to protect our systems - these include supervised, unsupervised, and semi-supervised Deep Learning approaches as well as Reinforcement and Federated Learning for privacy-preserving. This book applies Digital Forensics to IoT and discusses problems that professionals may encounter when working in the field of IoT forensics, providing ways in which smart devices can solve cyber security issues. Aimed at readers within the cyber security field, this book presents the most recent challenges that are faced in deep learning when creating a secure platform for IoT systems and addresses the possible solutions, paving the way for a more secure future"-- Provided by publisher.
Local Note:
John Wiley and Sons
Electronic Access:
https://onlinelibrary.wiley.com/doi/book/10.1002/9781119884170
Copies:
| Material Type | Item Barcode | Shelf Number |
|---|---|---|
| E-Book | 597870-1001 | TK5105.8857 .A23 2023 |
