Deep learning with PyTorch
Saved in:

Main authors: | Stevens, Eli; Antiga, Luca; Viehmann, Thomas
---|---
Format: | Book
Language: | English
Published: | Shelter Island : Manning Publications, [2020]
Subjects: | Maschinelles Lernen; Programmbibliothek; Python (Programmiersprache)
Online access: | Table of contents
Description: | xxviii, 490 pages, illustrations, diagrams
ISBN: | 9781617295263
Internal format
MARC
LEADER 00000nam a2200000 c 4500
001    BV047029115
003    DE-604
005    20220519
007    t
008    201125s2020 a||| |||| 00||| eng d
020 __ |a 9781617295263 |9 978-1-61729-526-3
035 __ |a (OCoLC)1224485196
035 __ |a (DE-599)OBVAC15482304
040 __ |a DE-604 |b ger |e rda
041 0_ |a eng
049 __ |a DE-573 |a DE-91 |a DE-29T |a DE-739 |a DE-473 |a DE-1050 |a DE-1029 |a DE-898
084 __ |a ST 300 |0 (DE-625)143650: |2 rvk
084 __ |a DAT 366 |2 stub
084 __ |a DAT 708 |2 stub
100 1_ |a Stevens, Eli |e Verfasser |0 (DE-588)125070703X |4 aut
245 10 |a Deep learning with Pytorch |c Eli Stevens, Luca Antiga, and Thomas Viehmann ; foreword by Soumith Chintala
264 _1 |a Shelter Island |b Manning Publications |c [2020]
264 _4 |c © 2020
300 __ |a xxviii, 490 Seiten |b Illustrationen, Diagramme
336 __ |b txt |2 rdacontent
337 __ |b n |2 rdamedia
338 __ |b nc |2 rdacarrier
650 07 |a Maschinelles Lernen |0 (DE-588)4193754-5 |2 gnd |9 rswk-swf
650 07 |a Programmbibliothek |0 (DE-588)4121521-7 |2 gnd |9 rswk-swf
650 07 |a Python |g Programmiersprache |0 (DE-588)4434275-5 |2 gnd |9 rswk-swf
689 00 |a Maschinelles Lernen |0 (DE-588)4193754-5 |D s
689 01 |a Programmbibliothek |0 (DE-588)4121521-7 |D s
689 02 |a Python |g Programmiersprache |0 (DE-588)4434275-5 |D s
689 0_ |5 DE-604
700 1_ |a Antiga, Luca |e Verfasser |0 (DE-588)1250708079 |4 aut
700 1_ |a Viehmann, Thomas |e Verfasser |0 (DE-588)1232540455 |4 aut
856 42 |m Digitalisierung UB Passau - ADAM Catalogue Enrichment |q application/pdf |u http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=032436436&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |3 Inhaltsverzeichnis
999 __ |a oai:aleph.bib-bvb.de:BVB01-032436436
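Records in this format can also be read programmatically. As a minimal illustrative sketch (not part of the catalog record itself), the Python snippet below extracts fields from an abridged MARCXML serialization of this record using only the standard library; the XML fragment is shortened from the record's fullrecord field further below, and only two data fields are shown.

```python
# Minimal sketch: reading a MARCXML serialization of the record above
# with Python's standard library. The XML is abridged from the record's
# "fullrecord" field; real records carry many more fields.
import xml.etree.ElementTree as ET

MARCXML = """\
<collection xmlns="http://www.loc.gov/MARC21/slim">
  <record>
    <datafield tag="245" ind1="1" ind2="0">
      <subfield code="a">Deep learning with Pytorch</subfield>
    </datafield>
    <datafield tag="700" ind1="1" ind2=" ">
      <subfield code="a">Antiga, Luca</subfield>
    </datafield>
  </record>
</collection>"""

NS = {"marc": "http://www.loc.gov/MARC21/slim"}
root = ET.fromstring(MARCXML)
for field in root.iterfind(".//marc:datafield", NS):
    # Join all subfields of a data field into one display string.
    text = " ".join(sf.text or "" for sf in field.iterfind("marc:subfield", NS))
    print(field.get("tag"), text)
```

A dedicated library such as pymarc offers similar access for binary MARC as well; the standard-library approach above simply avoids an extra dependency.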
Record in the search index
_version_ | 1804182000087597056 |
---|---|
adam_text |
contents

foreword xv
preface xvii
acknowledgments xix
about this book xxi
about the authors xxvii
about the cover illustration xxviii

Part 1 Core PyTorch 1

1 Introducing deep learning and the PyTorch Library 3
1.1 The deep learning revolution 4
1.2 PyTorch for deep learning 6
1.3 Why PyTorch? 7
  The deep learning competitive landscape 8
1.4 An overview of how PyTorch supports deep learning projects 10
1.5 Hardware and software requirements 13
  Using Jupyter Notebooks 14
1.6 Exercises 15
1.7 Summary 15

2 Pretrained networks 16
2.1 A pretrained network that recognizes the subject of an image 17
  Obtaining a pretrained network for image recognition 19 · AlexNet 20 · ResNet 22 · Ready, set, almost run 22 · Run! 25
2.2 A pretrained model that fakes it until it makes it 27
  The GAN game 28 · CycleGAN 29 · A network that turns horses into zebras 30
2.3 A pretrained network that describes scenes 33
  NeuralTalk2 34
2.4 Torch Hub 35
2.5 Conclusion 37
2.6 Exercises 38
2.7 Summary 38

3 It starts with a tensor 39
3.1 The world as floating-point numbers 40
3.2 Tensors: Multidimensional arrays 42
  From Python lists to PyTorch tensors 42 · Constructing our first tensors 43 · The essence of tensors 43
3.3 Indexing tensors 46
3.4 Named tensors 46
3.5 Tensor element types 50
  Specifying the numeric type with dtype 50 · A dtype for every occasion 51 · Managing a tensor's dtype attribute 51
3.6 The tensor API 52
3.7 Tensors: Scenic views of storage 53
  Indexing into storage 54 · Modifying stored values: In-place operations 55
3.8 Tensor metadata: Size, offset, and stride 55
  Views of another tensor's storage 56 · Transposing without copying 58 · Transposing in higher dimensions 60 · Contiguous tensors 60
3.9 Moving tensors to the GPU 62
  Managing a tensor's device attribute 63
3.10 NumPy interoperability 64
3.11 Generalized tensors are tensors, too 65
3.12 Serializing tensors 66
  Serializing to HDF5 with h5py 67
3.13 Conclusion 68
3.14 Exercises 68
3.15 Summary 68

4 Real-world data representation using tensors 70
4.1 Working with images 71
  Adding color channels 72 · Loading an image file 72 · Changing the layout 73 · Normalizing the data 74
4.2 3D images: Volumetric data 75
  Loading a specialized format 76
4.3 Representing tabular data 77
  Using a real-world dataset 77 · Loading a wine data tensor 78 · Representing scores 81 · One-hot encoding 81 · When to categorize 83 · Finding thresholds 84
4.4 Working with time series 87
  Adding a time dimension 88 · Shaping the data by time period 89 · Ready for training 90
4.5 Representing text 93
  Converting text to numbers 94 · One-hot-encoding characters 94 · One-hot encoding whole words 96 · Text embeddings 98 · Text embeddings as a blueprint 100
4.6 Conclusion 101
4.7 Exercises 101
4.8 Summary 102

5 The mechanics of learning 103
5.1 A timeless lesson in modeling 104
5.2 Learning is just parameter estimation 106
  A hot problem 107 · Gathering some data 107 · Visualizing the data 108 · Choosing a linear model as a first try 108
5.3 Less loss is what we want 109
  From problem back to PyTorch 110
5.4 Down along the gradient 113
  Decreasing loss 113 · Getting analytical 114 · Iterating to fit the model 116 · Normalizing inputs 119 · Visualizing (again) 122
5.5 PyTorch's autograd: Backpropagating all things 123
  Computing the gradient automatically 123 · Optimizers a la carte 127 · Training, validation, and overfitting 131 · Autograd nits and switching it off 137
5.6 Conclusion 139
5.7 Exercise 139
5.8 Summary 139

6 Using a neural network to fit the data 141
6.1 Artificial neurons 142
  Composing a multilayer network 144 · Understanding the error function 144 · All we need is activation 145 · More activation functions 147 · Choosing the best activation function 148 · What learning means for a neural network 149
6.2 The PyTorch nn module 151
  Using __call__ rather than forward 152 · Returning to the linear model 153
6.3 Finally a neural network 158
  Replacing the linear model 158 · Inspecting the parameters 159 · Comparing to the linear model 161
6.4 Conclusion 162
6.5 Exercises 162
6.6 Summary 163

7 Telling birds from airplanes: Learning from images 164
7.1 A dataset of tiny images 165
  Downloading CIFAR-10 166 · The Dataset class 166 · Dataset transforms 168 · Normalizing data 170
7.2 Distinguishing birds from airplanes 172
  Building the dataset 173 · A fully connected model 174 · Output of a classifier 175 · Representing the output as probabilities 176 · A loss for classifying 180 · Training the classifier 182 · The limits of going fully connected 189
7.3 Conclusion 191
7.4 Exercises 191
7.5 Summary 192

8 Using convolutions to generalize 193
8.1 The case for convolutions 194
  What convolutions do 194
8.2 Convolutions in action 196
  Padding the boundary 198 · Detecting features with convolutions 200 · Looking further with depth and pooling 202 · Putting it all together for our network 205
8.3 Subclassing nn.Module 207
  Our network as an nn.Module 208 · How PyTorch keeps track of parameters and submodules 209 · The functional API 210
8.4 Training our convnet 212
  Measuring accuracy 214 · Saving and loading our model 214 · Training on the GPU 215
8.5 Model design 217
  Adding memory capacity: Width 218 · Helping our model to converge and generalize: Regularization 219 · Going deeper to learn more complex structures: Depth 223 · Comparing the designs from this section 228 · It's already outdated 229
8.6 Conclusion 229
8.7 Exercises 230
8.8 Summary 231

Part 2 Learning from images in the real world: Early detection of lung cancer 233

9 Using PyTorch to fight cancer 235
9.1 Introduction to the use case 236
9.2 Preparing for a large-scale project 237
9.3 What is a CT scan, exactly? 238
9.4 The project: An end-to-end detector for lung cancer 241
  Why can't we just throw data at a neural network until it works? 245 · What is a nodule? 249 · Our data source: The LUNA Grand Challenge 251 · Downloading the LUNA data 251
9.5 Conclusion 252
9.6 Summary 253

10 Combining data sources into a unified dataset 254
10.1 Raw CT data files 256
10.2 Parsing LUNA's annotation data 256
  Training and validation sets 258 · Unifying our annotation and candidate data 259
10.3 Loading individual CT scans 262
  Hounsfield Units 264
10.4 Locating a nodule using the patient coordinate system 265
  The patient coordinate system 265 · CT scan shape and voxel sizes 267 · Converting between millimeters and voxel addresses 268 · Extracting a nodule from a CT scan 270
10.5 A straightforward dataset implementation 271
  Caching candidate arrays with the getCtRawCandidate function 274 · Constructing our dataset in LunaDataset.__init__ 275 · A training/validation split 275 · Rendering the data 277
10.6 Conclusion 277
10.7 Exercises 278
10.8 Summary 278

11 Training a classification model to detect suspected tumors 279
11.1 A foundational model and training loop 280
11.2 The main entry point for our application 282
11.3 Pretraining setup and initialization 284
  Initializing the model and optimizer 285 · Care and feeding of data loaders 287
11.4 Our first-pass neural network design 289
  The core convolutions 290 · The full model 293
11.5 Training and validating the model 295
  The computeBatchLoss function 297 · The validation loop is similar 299
11.6 Outputting performance metrics 300
  The logMetrics function 301
11.7 Running the training script 304
  Needed data for training 305 · Interlude: The enumerateWithEstimate function 306
11.8 Evaluating the model: Getting 99.7% correct means we're done, right? 308
11.9 Graphing training metrics with TensorBoard 309
  Running TensorBoard 309 · Adding TensorBoard support to the metrics logging function 313
11.10 Why isn't the model learning to detect nodules? 315
11.11 Conclusion 316
11.12 Exercises 316
11.13 Summary 316

12 Improving training with metrics and augmentation 318
12.1 High-level plan for improvement 319
12.2 Good dogs vs. bad guys: False positives and false negatives 320
12.3 Graphing the positives and negatives 322
  Recall is Roxie's strength 324 · Precision is Preston's forte 326 · Implementing precision and recall in logMetrics 327 · Our ultimate performance metric: The F1 score 328 · How does our model perform with our new metrics? 332
12.4 What does an ideal dataset look like? 334
  Making the data look less like the actual and more like the "ideal" 336 · Contrasting training with a balanced LunaDataset to previous runs 341 · Recognizing the symptoms of overfitting 343
12.5 Revisiting the problem of overfitting 345
  An overfit face-to-age prediction model 345
12.6 Preventing overfitting with data augmentation 346
  Specific data augmentation techniques 347 · Seeing the improvement from data augmentation 352
12.7 Conclusion 354
12.8 Exercises 355
12.9 Summary 356

13 Using segmentation to find suspected nodules 357
13.1 Adding a second model to our project 358
13.2 Various types of segmentation 360
13.3 Semantic segmentation: Per-pixel classification 361
  The U-Net architecture 364
13.4 Updating the model for segmentation 366
  Adapting an off-the-shelf model to our project 367
13.5 Updating the dataset for segmentation 369
  U-Net has very specific input size requirements 370 · U-Net trade-offs for 3D vs. 2D data 370 · Building the ground truth data 371 · Implementing Luna2dSegmentationDataset 378 · Designing our training and validation data 382 · Implementing TrainingLuna2dSegmentationDataset 383 · Augmenting on the GPU 384
13.6 Updating the training script for segmentation 386
  Initializing our segmentation and augmentation models 387 · Using the Adam optimizer 388 · Dice loss 389 · Getting images into TensorBoard 392 · Updating our metrics logging 396 · Saving our model 397
13.7 Results 399
13.8 Conclusion 401
13.9 Exercises 402
13.10 Summary 402

14 End-to-end nodule analysis, and where to go next 404
14.1 Towards the finish line 405
14.2 Independence of the validation set 407
14.3 Bridging CT segmentation and nodule candidate classification 408
  Segmentation 410 · Grouping voxels into nodule candidates 411 · Did we find a nodule? Classification to reduce false positives 412
14.4 Quantitative validation 416
14.5 Predicting malignancy 417
  Getting malignancy information 417 · An area under the curve baseline: Classifying by diameter 419 · Reusing preexisting weights: Fine-tuning 422 · More output in TensorBoard 428
14.6 What we see when we diagnose 432
14.7 What next? Additional sources of inspiration (and data) 433
  Preventing overfitting: Better regularization 434 · Refined training data 437 · Competition results and research papers 438
14.8 Conclusion 439
  Behind the curtain 439
14.9 Exercises 441
14.10 Summary 441

Part 3 Deployment 443

15 Deploying to production 445
15.1 Serving PyTorch models 446
  Our model behind a Flask server 446 · What we want from deployment 448 · Request batching 449
15.2 Exporting models 455
  Interoperability beyond PyTorch with ONNX 455 · PyTorch's own export: Tracing 456 · Our server with a traced model 458
15.3 Interacting with the PyTorch JIT 458
  What to expect from moving beyond classic Python/PyTorch 458 · The dual nature of PyTorch as interface and backend 460 · TorchScript 461 · Scripting the gaps of traceability 464
15.4 LibTorch: PyTorch in C++ 465
  Running JITed models from C++ 465 · C++ from the start: The C++ API 468
15.5 Going mobile 472
  Improving efficiency: Model design and quantization 475
15.6 Emerging technology: Enterprise serving of PyTorch models 476
15.7 Conclusion 477
15.8 Exercises 477
15.9 Summary 477

index 479
|
any_adam_object | 1 |
any_adam_object_boolean | 1 |
author | Stevens, Eli Antiga, Luca Viehmann, Thomas |
author_GND | (DE-588)125070703X (DE-588)1250708079 (DE-588)1232540455 |
author_facet | Stevens, Eli Antiga, Luca Viehmann, Thomas |
author_role | aut aut aut |
author_sort | Stevens, Eli |
author_variant | e s es l a la t v tv |
building | Verbundindex |
bvnumber | BV047029115 |
classification_rvk | ST 300 |
classification_tum | DAT 366 DAT 708 |
ctrlnum | (OCoLC)1224485196 (DE-599)OBVAC15482304 |
discipline | Informatik |
discipline_str_mv | Informatik |
format | Book |
fullrecord | <?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01834nam a2200421 c 4500</leader><controlfield tag="001">BV047029115</controlfield><controlfield tag="003">DE-604</controlfield><controlfield tag="005">20220519 </controlfield><controlfield tag="007">t</controlfield><controlfield tag="008">201125s2020 a||| |||| 00||| eng d</controlfield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">9781617295263</subfield><subfield code="9">978-1-61729-526-3</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(OCoLC)1224485196</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)OBVAC15482304</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-604</subfield><subfield code="b">ger</subfield><subfield code="e">rda</subfield></datafield><datafield tag="041" ind1="0" ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="049" ind1=" " ind2=" "><subfield code="a">DE-573</subfield><subfield code="a">DE-91</subfield><subfield code="a">DE-29T</subfield><subfield code="a">DE-739</subfield><subfield code="a">DE-473</subfield><subfield code="a">DE-1050</subfield><subfield code="a">DE-1029</subfield><subfield code="a">DE-898</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">ST 300</subfield><subfield code="0">(DE-625)143650:</subfield><subfield code="2">rvk</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">DAT 366</subfield><subfield code="2">stub</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">DAT 708</subfield><subfield code="2">stub</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Stevens, Eli</subfield><subfield code="e">Verfasser</subfield><subfield code="0">(DE-588)125070703X</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Deep learning with Pytorch</subfield><subfield code="c">Eli Stevens, Luca Antiga, and Thomas Viehmann ; foreword by Soumith Chintala</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="a">Shelter Island</subfield><subfield code="b">Manning Publications</subfield><subfield code="c">[2020]</subfield></datafield><datafield tag="264" ind1=" " ind2="4"><subfield code="c">© 2020</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">xxviii, 490 Seiten</subfield><subfield code="b">Illustrationen, Diagramme</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="b">n</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="b">nc</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">Maschinelles Lernen</subfield><subfield code="0">(DE-588)4193754-5</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">Programmbibliothek</subfield><subfield code="0">(DE-588)4121521-7</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">Python</subfield><subfield code="g">Programmiersprache</subfield><subfield 
code="0">(DE-588)4434275-5</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="689" ind1="0" ind2="0"><subfield code="a">Maschinelles Lernen</subfield><subfield code="0">(DE-588)4193754-5</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2="1"><subfield code="a">Programmbibliothek</subfield><subfield code="0">(DE-588)4121521-7</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2="2"><subfield code="a">Python</subfield><subfield code="g">Programmiersprache</subfield><subfield code="0">(DE-588)4434275-5</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2=" "><subfield code="5">DE-604</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Antiga, Luca</subfield><subfield code="e">Verfasser</subfield><subfield code="0">(DE-588)1250708079</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Viehmann, Thomas</subfield><subfield code="e">Verfasser</subfield><subfield code="0">(DE-588)1232540455</subfield><subfield code="4">aut</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="m">Digitalisierung UB Passau - ADAM Catalogue Enrichment</subfield><subfield code="q">application/pdf</subfield><subfield code="u">http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=032436436&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA</subfield><subfield code="3">Inhaltsverzeichnis</subfield></datafield><datafield tag="999" ind1=" " ind2=" "><subfield code="a">oai:aleph.bib-bvb.de:BVB01-032436436</subfield></datafield></record></collection> |
id | DE-604.BV047029115 |
illustrated | Illustrated |
index_date | 2024-07-03T16:01:43Z |
indexdate | 2024-07-10T09:00:35Z |
institution | BVB |
isbn | 9781617295263 |
language | English |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-032436436 |
oclc_num | 1224485196 |
open_access_boolean | |
owner | DE-573 DE-91 DE-BY-TUM DE-29T DE-739 DE-473 DE-BY-UBG DE-1050 DE-1029 DE-898 DE-BY-UBR |
owner_facet | DE-573 DE-91 DE-BY-TUM DE-29T DE-739 DE-473 DE-BY-UBG DE-1050 DE-1029 DE-898 DE-BY-UBR |
physical | xxviii, 490 Seiten Illustrationen, Diagramme |
publishDate | 2020 |
publishDateSearch | 2020 |
publishDateSort | 2020 |
publisher | Manning Publications |
record_format | marc |
spelling | Stevens, Eli Verfasser (DE-588)125070703X aut Deep learning with Pytorch Eli Stevens, Luca Antiga, and Thomas Viehmann ; foreword by Soumith Chintala Shelter Island Manning Publications [2020] © 2020 xxviii, 490 Seiten Illustrationen, Diagramme txt rdacontent n rdamedia nc rdacarrier Maschinelles Lernen (DE-588)4193754-5 gnd rswk-swf Programmbibliothek (DE-588)4121521-7 gnd rswk-swf Python Programmiersprache (DE-588)4434275-5 gnd rswk-swf Maschinelles Lernen (DE-588)4193754-5 s Programmbibliothek (DE-588)4121521-7 s Python Programmiersprache (DE-588)4434275-5 s DE-604 Antiga, Luca Verfasser (DE-588)1250708079 aut Viehmann, Thomas Verfasser (DE-588)1232540455 aut Digitalisierung UB Passau - ADAM Catalogue Enrichment application/pdf http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=032436436&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA Inhaltsverzeichnis |
spellingShingle | Stevens, Eli Antiga, Luca Viehmann, Thomas Deep learning with Pytorch Maschinelles Lernen (DE-588)4193754-5 gnd Programmbibliothek (DE-588)4121521-7 gnd Python Programmiersprache (DE-588)4434275-5 gnd |
subject_GND | (DE-588)4193754-5 (DE-588)4121521-7 (DE-588)4434275-5 |
title | Deep learning with Pytorch |
title_auth | Deep learning with Pytorch |
title_exact_search | Deep learning with Pytorch |
title_exact_search_txtP | Deep learning with Pytorch |
title_full | Deep learning with Pytorch Eli Stevens, Luca Antiga, and Thomas Viehmann ; foreword by Soumith Chintala |
title_fullStr | Deep learning with Pytorch Eli Stevens, Luca Antiga, and Thomas Viehmann ; foreword by Soumith Chintala |
title_full_unstemmed | Deep learning with Pytorch Eli Stevens, Luca Antiga, and Thomas Viehmann ; foreword by Soumith Chintala |
title_short | Deep learning with Pytorch |
title_sort | deep learning with pytorch |
topic | Maschinelles Lernen (DE-588)4193754-5 gnd Programmbibliothek (DE-588)4121521-7 gnd Python Programmiersprache (DE-588)4434275-5 gnd |
topic_facet | Maschinelles Lernen Programmbibliothek Python Programmiersprache |
url | http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=032436436&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |
work_keys_str_mv | AT stevenseli deeplearningwithpytorch AT antigaluca deeplearningwithpytorch AT viehmannthomas deeplearningwithpytorch |