Deep Learning for Robot Perception and Cognition
Format: Book
Language: English
Published: London [u.a.]: Academic Press, 2022
Online access: Table of contents
Summary: Deep Learning for Robot Perception and Cognition introduces a broad range of topics and methods in deep learning for robot perception and cognition together with end-to-end methodologies. The book provides the conceptual and mathematical background needed for approaching a large number of robot perception and cognition tasks from an end-to-end learning point-of-view. The book is suitable for students, university and industry researchers and practitioners in Robotic Vision, Intelligent Control, Mechatronics, Deep Learning, Robotic Perception and Cognition tasks.
Description: xxiii, 611 pages: illustrations, diagrams
ISBN: 9780323857871
Internal format (MARC)
LEADER | 00000nam a2200000 c 4500 | ||
---|---|---|---|
001 | BV049067781 | ||
003 | DE-604 | ||
005 | 20230906 | ||
007 | t| | ||
008 | 230726s2022 xx a||| |||| 00||| eng d | ||
020 | |a 9780323857871 |9 978-0-323-85787-1 | ||
024 | 3 | |a 9780323857871 | |
035 | |a (ELiSA)ELiSA-9780323857871 | ||
035 | |a (OCoLC)1401211704 | ||
035 | |a (DE-599)HBZHT021577743 | ||
040 | |a DE-604 |b ger |e rda | ||
041 | 0 | |a eng | |
049 | |a DE-739 | ||
084 | |a ST 308 |0 (DE-625)143655: |2 rvk | ||
245 | 1 | 0 | |a Deep Learning for Robot Perception and Cognition |c Edited by Alexandros Iosifidis, Anastasios Tefas |
264 | 1 | |a London [u.a.] |b Academic Press |c 2022 | |
300 | |a xxiii, 611 Seiten |b Illustrationen, Diagramme | ||
336 | |b txt |2 rdacontent | ||
337 | |b n |2 rdamedia | ||
338 | |b nc |2 rdacarrier | ||
505 | 8 | |a 1. Introduction 2. Neural Networks and Backpropagation 3. Convolutional Neural Networks 4. Graph Convolutional Networks 5. Recurrent Neural Networks 6. Deep Reinforcement Learning 7. Lightweight Deep Learning 8. Knowledge Distillation 9. Progressive and Compressive Deep Learning 10. Representation Learning and Retrieval 11. Object Detection and Tracking 12. Semantic Scene Segmentation for Robotics 13. 3D Object Detection and Tracking 14. Human Activity Recognition 15. Deep Learning for Vision-based Navigation in Autonomous Drone Racing 16. Robotic Grasping in Agile Production 17. Deep learning in Multiagent Systems 18. Simulation Environments 19. Biosignal time-series analysis 20. Medical Image Analysis 21. Deep learning for robotics examples using OpenDR ; | |
520 | 3 | |a Deep Learning for Robot Perception and Cognition introduces a broad range of topics and methods in deep learning for robot perception and cognition together with end-to-end methodologies. The book provides the conceptual and mathematical background needed for approaching a large number of robot perception and cognition tasks from an end-to-end learning point-of-view. The book is suitable for students, university and industry researchers and practitioners in Robotic Vision, Intelligent Control, Mechatronics, Deep Learning, Robotic Perception and Cognition tasks. | |
650 | 0 | 7 | |a Diagnostik |0 (DE-588)4113303-1 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Deep Learning |0 (DE-588)1135597375 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Bildsegmentierung |0 (DE-588)4145448-0 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Mustererkennung |0 (DE-588)4040936-3 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Faktor Mensch |0 (DE-588)4812463-1 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Maschinelles Sehen |0 (DE-588)4129594-8 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Maschinelles Lernen |0 (DE-588)4193754-5 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Neuronales Netz |0 (DE-588)4226127-2 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Robotik |0 (DE-588)4261462-4 |2 gnd |9 rswk-swf |
653 | |a Maschinelles Lernen | ||
653 | |a Communications engineering | ||
653 | |a Engineering | ||
653 | |a Single-item retail product | ||
653 | |a Robotik | ||
653 | |a Computer Vision | ||
653 | 0 | |a Deep Learning; Robot Perception; Robot Cognition; Intelligent Control; Mechatronics | |
689 | 0 | 0 | |a Deep Learning |0 (DE-588)1135597375 |D s |
689 | 0 | 1 | |a Robotik |0 (DE-588)4261462-4 |D s |
689 | 0 | 2 | |a Faktor Mensch |0 (DE-588)4812463-1 |D s |
689 | 0 | 3 | |a Diagnostik |0 (DE-588)4113303-1 |D s |
689 | 0 | 4 | |a Maschinelles Lernen |0 (DE-588)4193754-5 |D s |
689 | 0 | 5 | |a Mustererkennung |0 (DE-588)4040936-3 |D s |
689 | 0 | 6 | |a Bildsegmentierung |0 (DE-588)4145448-0 |D s |
689 | 0 | 7 | |a Neuronales Netz |0 (DE-588)4226127-2 |D s |
689 | 0 | 8 | |a Maschinelles Sehen |0 (DE-588)4129594-8 |D s |
689 | 0 | |5 DE-604 | |
700 | 1 | |a Iosifidis, Alexandros |e Sonstige |0 (DE-588)1277552991 |4 oth | |
700 | 1 | |a Tefas, Anastasios |e Sonstige |0 (DE-588)1277553491 |4 oth | |
856 | 4 | 2 | |m Digitalisierung UB Passau - ADAM Catalogue Enrichment |q application/pdf |u http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=034329813&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |3 Inhaltsverzeichnis |
943 | 1 | |a oai:aleph.bib-bvb.de:BVB01-034329813 |
Record in the search index

_version_: 1823932152360730624

adam_text (table of contents):
Contents

List of contributors (p. xv)
Preface (p. xix)
Acknowledgements (p. xxi)
Editors biographies (p. xxiii)

CHAPTER 1 Introduction (p. 1)
Alexandros Iosifidis and Anastasios Tefas
1.1 Artificial intelligence and machine learning
1.2 Real world problems representation
1.3 Machine learning tasks
1.4 Shallow and deep learning
1.5 Robotics and deep learning
References

CHAPTER 2 Neural networks and backpropagation (p. 17)
Adamantios Zaras, Nikolaos Passalis, and Anastasios Tefas
2.1 Introduction
2.2 Activation functions
2.3 Cost functions
2.4 Backpropagation
2.5 Optimizers and training
2.6 Overfitting
2.6.1 Early stopping
2.6.2 Regularization
2.6.3 Dropout
2.6.4 Batch normalization
2.7 Concluding remarks
References

CHAPTER 3 Convolutional neural networks (p. 35)
Jenni Raitoharju
3.1 Introduction
3.2 Structure of convolutional neural networks
3.2.1 Notation
3.2.2 Convolutional layers
3.2.3 Activation functions
3.2.4 Pooling layers
3.2.5 Fully connected and output layers
3.2.6 Overall CNN structure
3.3 Training convolutional neural networks
3.3.1 Backpropagation formulas on CNNs
3.3.2 Loss functions
3.3.3 Batch training and optimizers
3.3.4 Typical challenges in CNN training
3.3.5 Solutions to CNN training challenges
3.4 Conclusions
References

CHAPTER 4 Graph convolutional networks (p. 71)
Negar Heidari, Lukas Hedegaard, and Alexandros Iosifidis
4.1 Introduction
4.1.1 Graph definition
4.2 Spectral graph convolutional network
4.3 Spatial graph convolutional network
4.4 Graph attention network (GAT)
4.5 Graph convolutional networks for large graphs
4.5.1 Layer sampling methods
4.5.2 Graph sampling methods
4.6 Datasets and libraries
4.7 Conclusion
References

CHAPTER 5 Recurrent neural networks (p. 101)
Avraam Tsantekidis, Nikolaos Passalis, and Anastasios Tefas
5.1 Introduction
5.2 Vanilla RNN
5.3 Long-short term memory
5.4 Gated recurrent unit
5.5 Other RNN variants
5.6 Applications
5.7 Concluding remarks
References

CHAPTER 6 Deep reinforcement learning (p. 117)
Avraam Tsantekidis, Nikolaos Passalis, and Anastasios Tefas
6.1 Introduction
6.2 Value-based methods
6.2.1 Q-learning
6.2.2 Deep Q-learning
6.3 Policy-based methods
6.3.1 Policy gradient
6.3.2 Actor-critic methods
6.3.3 Deep policy gradient-based methods
6.4 Concluding remarks
References

CHAPTER 7 Lightweight deep learning (p. 131)
Paraskevi Nousi, Maria Tzelepi, Nikolaos Passalis, and Anastasios Tefas
7.1 Introduction
7.2 Lightweight convolutional neural network architectures
7.2.1 Lightweight CNNs for classification
7.2.2 Lightweight object detection
7.3 Regularization of lightweight convolutional neural networks
7.3.1 Graph embedded-based regularizer
7.3.2 Class-specific discriminant regularizer
7.3.3 Mutual information regularizer
7.4 Bag-of-features for improved representation learning
7.4.1 Convolutional feature histograms for real-time tracking
7.5 Early exits for adaptive inference
7.5.1 Early exits using bag-of-features
7.5.2 Adaptive inference with early exits
7.6 Concluding remarks
References

CHAPTER 8 Knowledge distillation (p. 165)
Nikolaos Passalis, Maria Tzelepi, and Anastasios Tefas
8.1 Introduction
8.2 Neural network distillation
8.3 Probabilistic knowledge transfer
8.4 Multilayer knowledge distillation
8.4.1 Hint-based distillation
8.4.2 Flow of solution procedure distillation
8.4.3 Other multilayer distillation methods
8.5 Teacher training strategies
8.6 Concluding remarks
References

CHAPTER 9 Progressive and compressive learning (p. 187)
Dat Thanh Tran, Moncef Gabbouj, and Alexandros Iosifidis
9.1 Introduction
9.2 Progressive neural network learning
9.2.1 Broad learning system
9.2.2 Progressive learning network
9.2.3 Progressive operational perceptron and its variants
9.2.4 Heterogeneous multilayer generalized operational perceptron
9.2.5 Subset sampling and online hyperparameter search for training enhancement
9.3 Compressive learning
9.3.1 Vector-based compressive learning
9.3.2 Tensor-based compressive learning
9.4 Conclusions
References

CHAPTER 10 Representation learning and retrieval (p. 221)
Maria Tzelepi, Paraskevi Nousi, Nikolaos Passalis, and Anastasios Tefas
10.1 Introduction
10.2 Discriminative and self-supervised autoencoders
10.3 Deep representation learning for content based image retrieval
10.4 Model retraining methods for image retrieval
10.4.1 Fully unsupervised retraining
10.4.2 Retraining with relevance information
10.4.3 Relevance feedback based retraining
10.5 Variance preserving supervised representation learning
10.6 Concluding remarks
References

CHAPTER 11 Object detection and tracking (p. 243)
Kateryna Chumachenko, Moncef Gabbouj, and Alexandros Iosifidis
11.1 Object detection
11.1.1 Object detection essentials
11.1.2 Two-stage object detectors
11.1.3 One-stage detectors
11.1.4 Anchor-free detectors
11.2 Object tracking
11.2.1 Single object tracking
11.2.2 Multiple object tracking
11.3 Conclusion
References

CHAPTER 12 Semantic scene segmentation for robotics (p. 279)
Juana Valeria Hurtado and Abhinav Valada
12.1 Introduction
12.2 Algorithms and architectures for semantic segmentation
12.2.1 Traditional methods
12.2.2 Deep learning methods
12.2.3 Encoder variants
12.2.4 Upsampling methods
12.2.5 Techniques for exploiting context
12.2.6 Real-time architectures
12.2.7 Object detection-based methods
12.3 Loss functions for semantic segmentation
12.3.1 Pixelwise cross entropy loss
12.3.2 Dice loss
12.4 Semantic segmentation using multiple inputs
12.4.1 Video semantic segmentation
12.4.2 Point cloud semantic segmentation
12.4.3 Multimodal semantic segmentation
12.5 Semantic segmentation data sets and benchmarks
12.5.1 Outdoor data sets
12.5.2 Indoor data sets
12.5.3 General purpose data sets
12.6 Semantic segmentation metrics
12.6.1 Accuracy
12.6.2 Computational complexity
12.7 Conclusion
References

CHAPTER 13 3D object detection and tracking (p. 313)
Illia Oleksiienko and Alexandros Iosifidis
13.1 Introduction
13.2 3D object detection
13.2.1 Input data for 3D object detection
13.2.2 3D object detection data sets and metrics
13.2.3 Lidar-based 3D object detection methods
13.2.4 Image+Lidar-based 3D object detection
13.2.5 Monocular 3D object detection
13.2.6 Binocular 3D object detection
13.3 3D object tracking
13.3.1 3D object tracking data sets and metrics
13.3.2 3D object tracking methods
13.4 Conclusion
References

CHAPTER 14 Human activity recognition (p. 341)
Lukas Hedegaard, Negar Heidari, and Alexandros Iosifidis
14.1 Introduction
14.1.1 Tasks in human activity recognition
14.1.2 Input modalities for human activity recognition
14.2 Trimmed action recognition
14.2.1 2D convolutional and recurrent neural network-based architectures
14.2.2 3D convolutional neural network architectures
14.2.3 Inflated 3D CNN architectures
14.2.4 Factorized (2+1)D CNN architectures
14.2.5 Skeleton-based action recognition
14.2.6 Multistream architectures
14.3 Temporal action localization
14.4 Spatiotemporal action localization
14.5 Data sets for human activity recognition
14.6 Conclusion
References

CHAPTER 15 Deep learning for vision-based navigation in autonomous drone racing (p. 371)
Huy Xuan Pham, Halil Ibrahim Uğurlu, Jonas Le Fevre, Deniz Bardakci, and Erdal Kayacan
15.1 Introduction
15.2 System decomposition approach in drone racing navigation
15.2.1 Related work
15.2.2 Drone hardware
15.2.3 State estimation
15.2.4 Control for agile quadrotor flight
15.2.5 Motion planning for agile flight
15.2.6 Deep learning for perception
15.2.7 Experimental results
15.3 Transfer learning and end-to-end planning
15.3.1 Related work
15.3.2 Sim-to-real transfer with domain randomization
15.3.3 Perceive and control with variational autoencoders
15.3.4 Deep reinforcement learning
15.4 Useful tools for data collection and training
15.4.1 Simulation environments for autonomous drone racing
15.4.2 Data sets
15.5 Conclusions and future work
15.5.1 Conclusions
15.5.2 Future work
References

CHAPTER 16 Robotic grasping in agile production (p. 407)
Amir Mehman Sefat, Saad Ahmad, Alexandre Angleraud, Esa Rahtu, and Roel Pieters
16.1 Introduction
16.1.1 Robot tasks in agile production
16.1.2 Deep learning in agile production
16.1.3 Requirements in agile production
16.1.4 Limitations in agile production
16.2 Grasping and object manipulation
16.2.1 Problem statement
16.2.2 Analytical versus data-driven approaches
16.2.3 Grasp detection with RGB-D
16.2.4 Grasp detection with point clouds
16.3 Grasp evaluation
16.3.1 Metrics
16.3.2 Pose estimation with PVN3D
16.3.3 Grasp detection with 6-DOF GraspNet
16.3.4 Pick-and-place results
16.4 Manipulation benchmarking
16.5 Datasets
16.6 Conclusion
References

CHAPTER 17 Deep learning in multiagent systems (p. 435)
Lukas Esterle
17.1 Introduction
17.2 Setting the scene
17.3 Challenges
17.4 Deep learning in multiagent systems
17.4.1 Individual learning
17.4.2 Collaborative and cooperative learning
17.5 Conclusion
References

CHAPTER 18 Simulation environments (p. 461)
Charalampos Symeonidis and Nikos Nikolaidis
18.1 Introduction
18.1.1 Robotic simulators architecture
18.1.2 Simulation types
18.1.3 Qualitative characteristics
18.2 Robotic simulators
18.2.1 Gazebo
18.2.2 AirSim
18.2.3 Webots
18.2.4 CARLA
18.2.5 CoppeliaSim
18.2.6 Other simulators
18.3 Conclusions
References

CHAPTER 19 Biosignal time-series analysis (p. 491)
Serkan Kiranyaz, Turker Ince, Muhammad E.H. Chowdhury, Aysen Değerli, and Moncef Gabbouj
19.1 Introduction
19.2 ECG classification and advance warning for arrhythmia
19.2.1 Patient-specific ECG classification by 1D convolutional neural networks
19.2.2 Personalized advance warning system for cardiac arrhythmias
19.3 Early prediction of mortality risk for COVID-19 patients
19.3.1 Introduction and motivation
19.3.2 Methodology
19.3.3 Results and discussion
19.4 Conclusion
References

CHAPTER 20 Medical image analysis (p. 541)
Aysen Değerli, Mehmet Yamaç, Mete Ahishali, Serkan Kiranyaz, and Moncef Gabbouj
20.1 Introduction
20.2 Early detection of myocardial infarction using echocardiography
20.2.1 Methodology
20.2.2 Experimental evaluation
20.3 COVID-19 recognition from X-ray images via convolutional sparse support estimator based classifier
20.3.1 Preliminaries
20.3.2 CSEN-based COVID-19 recognition system
20.3.3 Experimental evaluations
20.4 Conclusion
References

CHAPTER 21 Deep learning for robotics examples using OpenDR (p. 579)
Manos Kirtas, Konstantinos Tsampazis, Pavlos Tosidis, Nikolaos Passalis, and Anastasios Tefas
21.1 Introduction
21.2 Structure of OpenDR toolkit and application examples
21.3 Cointegration of simulation and training
21.3.1 One-node architecture
21.3.2 Emitter-receiver architecture
21.3.3 Design decisions
21.4 Concluding remarks
References

Index (p. 597)
any_adam_object | 1 |
any_adam_object_boolean | 1 |
author_GND | (DE-588)1277552991 (DE-588)1277553491 |
building | Verbundindex |
bvnumber | BV049067781 |
classification_rvk | ST 308 |
contents | 1. Introduction 2. Neural Networks and Backpropagation 3. Convolutional Neural Networks 4. Graph Convolutional Networks 5. Recurrent Neural Networks 6. Deep Reinforcement Learning 7. Lightweight Deep Learning 8. Knowledge Distillation 9. Progressive and Compressive Deep Learning 10. Representation Learning and Retrieval 11. Object Detection and Tracking 12. Semantic Scene Segmentation for Robotics 13. 3D Object Detection and Tracking 14. Human Activity Recognition 15. Deep Learning for Vision-based Navigation in Autonomous Drone Racing 16. Robotic Grasping in Agile Production 17. Deep learning in Multiagent Systems 18. Simulation Environments 19. Biosignal time-series analysis 20. Medical Image Analysis 21. Deep learning for robotics examples using OpenDR ; |
ctrlnum | (ELiSA)ELiSA-9780323857871 (OCoLC)1401211704 (DE-599)HBZHT021577743 |
discipline | Informatik |
discipline_str_mv | Informatik |
format | Book |
code="a">Robotik</subfield><subfield code="0">(DE-588)4261462-4</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">Maschinelles Lernen</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">Communications engineering</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">Engineering</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">Single-item retail product</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">Robotik</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">Computer Vision</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Deep Learning; Robot Perception; Robot Cognition; Intelligent Control; Mechatronics</subfield></datafield><datafield tag="689" ind1="0" ind2="0"><subfield code="a">Deep Learning</subfield><subfield code="0">(DE-588)1135597375</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2="1"><subfield code="a">Robotik</subfield><subfield code="0">(DE-588)4261462-4</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2="2"><subfield code="a">Faktor Mensch</subfield><subfield code="0">(DE-588)4812463-1</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2="3"><subfield code="a">Diagnostik</subfield><subfield code="0">(DE-588)4113303-1</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2="4"><subfield code="a">Maschinelles Lernen</subfield><subfield code="0">(DE-588)4193754-5</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2="5"><subfield code="a">Mustererkennung</subfield><subfield code="0">(DE-588)4040936-3</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2="6"><subfield 
code="a">Bildsegmentierung</subfield><subfield code="0">(DE-588)4145448-0</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2="7"><subfield code="a">Neuronales Netz</subfield><subfield code="0">(DE-588)4226127-2</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2="8"><subfield code="a">Maschinelles Sehen</subfield><subfield code="0">(DE-588)4129594-8</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2=" "><subfield code="5">DE-604</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Iosifidis, Alexandros</subfield><subfield code="e">Sonstige</subfield><subfield code="0">(DE-588)1277552991</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Tefas, Anastasios</subfield><subfield code="e">Sonstige</subfield><subfield code="0">(DE-588)1277553491</subfield><subfield code="4">oth</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="m">Digitalisierung UB Passau - ADAM Catalogue Enrichment</subfield><subfield code="q">application/pdf</subfield><subfield code="u">http://bvbr.bib-bvb.de:8991/F?func=service&amp;doc_library=BVB01&amp;local_base=BVB01&amp;doc_number=034329813&amp;sequence=000001&amp;line_number=0001&amp;func_code=DB_RECORDS&amp;service_type=MEDIA</subfield><subfield code="3">Inhaltsverzeichnis</subfield></datafield><datafield tag="943" ind1="1" ind2=" "><subfield code="a">oai:aleph.bib-bvb.de:BVB01-034329813</subfield></datafield></record></collection> |
id | DE-604.BV049067781 |
illustrated | Illustrated |
index_date | 2024-07-03T22:26:10Z |
indexdate | 2025-02-13T09:00:48Z |
institution | BVB |
isbn | 9780323857871 |
language | English |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-034329813 |
oclc_num | 1401211704 |
open_access_boolean | |
owner | DE-739 |
owner_facet | DE-739 |
physical | xxiii, 611 Seiten Illustrationen, Diagramme |
publishDate | 2022 |
publishDateSearch | 2022 |
publishDateSort | 2022 |
publisher | Academic Press |
record_format | marc |
spelling | Deep Learning for Robot Perception and Cognition Edited by Alexandros Iosifidis, Anastasios Tefas London [u.a.] Academic Press 2022 xxiii, 611 Seiten Illustrationen, Diagramme txt rdacontent n rdamedia nc rdacarrier 1. Introduction 2. Neural Networks and Backpropagation 3. Convolutional Neural Networks 4. Graph Convolutional Networks 5. Recurrent Neural Networks 6. Deep Reinforcement Learning 7. Lightweight Deep Learning 8. Knowledge Distillation 9. Progressive and Compressive Deep Learning 10. Representation Learning and Retrieval 11. Object Detection and Tracking 12. Semantic Scene Segmentation for Robotics 13. 3D Object Detection and Tracking 14. Human Activity Recognition 15. Deep Learning for Vision-based Navigation in Autonomous Drone Racing 16. Robotic Grasping in Agile Production 17. Deep learning in Multiagent Systems 18. Simulation Environments 19. Biosignal time-series analysis 20. Medical Image Analysis 21. Deep learning for robotics examples using OpenDR ; Deep Learning for Robot Perception and Cognition introduces a broad range of topics and methods in deep learning for robot perception and cognition together with end-to-end methodologies. The book provides the conceptual and mathematical background needed for approaching a large number of robot perception and cognition tasks from an end-to-end learning point-of-view. The book is suitable for students, university and industry researchers and practitioners in Robotic Vision, Intelligent Control, Mechatronics, Deep Learning, Robotic Perception and Cognition tasks. 
Diagnostik (DE-588)4113303-1 gnd rswk-swf Deep Learning (DE-588)1135597375 gnd rswk-swf Bildsegmentierung (DE-588)4145448-0 gnd rswk-swf Mustererkennung (DE-588)4040936-3 gnd rswk-swf Faktor Mensch (DE-588)4812463-1 gnd rswk-swf Maschinelles Sehen (DE-588)4129594-8 gnd rswk-swf Maschinelles Lernen (DE-588)4193754-5 gnd rswk-swf Neuronales Netz (DE-588)4226127-2 gnd rswk-swf Robotik (DE-588)4261462-4 gnd rswk-swf Maschinelles Lernen Communications engineering Engineering Single-item retail product Robotik Computer Vision Deep Learning; Robot Perception; Robot Cognition; Intelligent Control; Mechatronics Deep Learning (DE-588)1135597375 s Robotik (DE-588)4261462-4 s Faktor Mensch (DE-588)4812463-1 s Diagnostik (DE-588)4113303-1 s Maschinelles Lernen (DE-588)4193754-5 s Mustererkennung (DE-588)4040936-3 s Bildsegmentierung (DE-588)4145448-0 s Neuronales Netz (DE-588)4226127-2 s Maschinelles Sehen (DE-588)4129594-8 s DE-604 Iosifidis, Alexandros Sonstige (DE-588)1277552991 oth Tefas, Anastasios Sonstige (DE-588)1277553491 oth Digitalisierung UB Passau - ADAM Catalogue Enrichment application/pdf http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=034329813&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA Inhaltsverzeichnis |
spellingShingle | Deep Learning for Robot Perception and Cognition 1. Introduction 2. Neural Networks and Backpropagation 3. Convolutional Neural Networks 4. Graph Convolutional Networks 5. Recurrent Neural Networks 6. Deep Reinforcement Learning 7. Lightweight Deep Learning 8. Knowledge Distillation 9. Progressive and Compressive Deep Learning 10. Representation Learning and Retrieval 11. Object Detection and Tracking 12. Semantic Scene Segmentation for Robotics 13. 3D Object Detection and Tracking 14. Human Activity Recognition 15. Deep Learning for Vision-based Navigation in Autonomous Drone Racing 16. Robotic Grasping in Agile Production 17. Deep learning in Multiagent Systems 18. Simulation Environments 19. Biosignal time-series analysis 20. Medical Image Analysis 21. Deep learning for robotics examples using OpenDR ; Diagnostik (DE-588)4113303-1 gnd Deep Learning (DE-588)1135597375 gnd Bildsegmentierung (DE-588)4145448-0 gnd Mustererkennung (DE-588)4040936-3 gnd Faktor Mensch (DE-588)4812463-1 gnd Maschinelles Sehen (DE-588)4129594-8 gnd Maschinelles Lernen (DE-588)4193754-5 gnd Neuronales Netz (DE-588)4226127-2 gnd Robotik (DE-588)4261462-4 gnd |
subject_GND | (DE-588)4113303-1 (DE-588)1135597375 (DE-588)4145448-0 (DE-588)4040936-3 (DE-588)4812463-1 (DE-588)4129594-8 (DE-588)4193754-5 (DE-588)4226127-2 (DE-588)4261462-4 |
title | Deep Learning for Robot Perception and Cognition |
title_auth | Deep Learning for Robot Perception and Cognition |
title_exact_search | Deep Learning for Robot Perception and Cognition |
title_exact_search_txtP | Deep Learning for Robot Perception and Cognition |
title_full | Deep Learning for Robot Perception and Cognition Edited by Alexandros Iosifidis, Anastasios Tefas |
title_fullStr | Deep Learning for Robot Perception and Cognition Edited by Alexandros Iosifidis, Anastasios Tefas |
title_full_unstemmed | Deep Learning for Robot Perception and Cognition Edited by Alexandros Iosifidis, Anastasios Tefas |
title_short | Deep Learning for Robot Perception and Cognition |
title_sort | deep learning for robot perception and cognition |
topic | Diagnostik (DE-588)4113303-1 gnd Deep Learning (DE-588)1135597375 gnd Bildsegmentierung (DE-588)4145448-0 gnd Mustererkennung (DE-588)4040936-3 gnd Faktor Mensch (DE-588)4812463-1 gnd Maschinelles Sehen (DE-588)4129594-8 gnd Maschinelles Lernen (DE-588)4193754-5 gnd Neuronales Netz (DE-588)4226127-2 gnd Robotik (DE-588)4261462-4 gnd |
topic_facet | Diagnostik Deep Learning Bildsegmentierung Mustererkennung Faktor Mensch Maschinelles Sehen Maschinelles Lernen Neuronales Netz Robotik |
url | http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=034329813&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |
work_keys_str_mv | AT iosifidisalexandros deeplearningforrobotperceptionandcognition AT tefasanastasios deeplearningforrobotperceptionandcognition |