Hands-on neuroevolution with Python: build high-performing artificial neural network architectures using neuroevolution-based algorithms
Main author: Omelianenko, Iaroslav
Format: Book
Language: English
Published: Birmingham : Packt Publishing, Limited, 2019
           Birmingham ; Mumbai : Packt, December 2019
Subjects: Künstliche Intelligenz (artificial intelligence); Python (programming language)
Online access: Table of contents; Blurb
Physical description: 353 pages, illustrations
ISBN: 9781838824914
Internal format
MARC
LEADER | 00000nam a2200000 c 4500 | ||
001 | BV046617482 | ||
003 | DE-604 | ||
005 | 20200612 | ||
007 | t | ||
008 | 200306s2019 a||| |||| 00||| eng d | ||
020 | |a 9781838824914 |9 978-1-83882-491-4 | ||
035 | |a (OCoLC)1164639289 | ||
035 | |a (DE-599)BVBBV046617482 | ||
040 | |a DE-604 |b ger |e rda | ||
041 | 0 | |a eng | |
049 | |a DE-355 | ||
084 | |a ST 250 |0 (DE-625)143626: |2 rvk | ||
100 | 1 | |a Omelianenko, Iaroslav |e Verfasser |4 aut | |
245 | 1 | 0 | |a Hands-on neuroevolution with Python |b build high-performing artificial neural network architectures using neuroevolution-based algorithms |c Iaroslav Omelianenko |
264 | 1 | |a Birmingham |b Packt Publishing, Limited |c 2019 | |
264 | 1 | |a Birmingham ; Mumbai |b Packt |c December 2019 | |
300 | |a 353 Seiten |b Illustrationen | ||
336 | |b txt |2 rdacontent | ||
337 | |b n |2 rdamedia | ||
338 | |b nc |2 rdacarrier | ||
650 | 0 | 7 | |a Python |g Programmiersprache |0 (DE-588)4434275-5 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Künstliche Intelligenz |0 (DE-588)4033447-8 |2 gnd |9 rswk-swf |
689 | 0 | 0 | |a Künstliche Intelligenz |0 (DE-588)4033447-8 |D s |
689 | 0 | 1 | |a Python |g Programmiersprache |0 (DE-588)4434275-5 |D s |
689 | 0 | |5 DE-604 | |
856 | 4 | 2 | |m Digitalisierung UB Regensburg - ADAM Catalogue Enrichment |q application/pdf |u http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=032029260&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |3 Inhaltsverzeichnis |
856 | 4 | 2 | |m Digitalisierung UB Regensburg - ADAM Catalogue Enrichment |q application/pdf |u http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=032029260&sequence=000003&line_number=0002&func_code=DB_RECORDS&service_type=MEDIA |3 Klappentext |
999 | |a oai:aleph.bib-bvb.de:BVB01-032029260 |
Record in the search index
_version_ | 1804181293384073216 |
adam_text | Table of Contents

Preface

Section 1: Fundamentals of Evolutionary Computation Algorithms and Neuroevolution Methods

Chapter 1: Overview of Neuroevolution Methods
    Evolutionary algorithms and neuroevolution-based methods; Genetic operators; Mutation operator; Crossover operator; Genome encoding schemes; Direct genome encoding; Indirect genome encoding; Coevolution; Modularity and hierarchy; NEAT algorithm overview; NEAT encoding scheme; Structural mutations; Crossover with an innovation number; Speciation; Hypercube-based NEAT; Compositional Pattern Producing Networks; Substrate configuration; Evolving connective CPPNs and the HyperNEAT algorithm; Evolvable-Substrate HyperNEAT; Information patterns in the hypercube; Quadtree as an effective information extractor; ES-HyperNEAT algorithm; Novelty Search optimization method; Novelty Search and natural evolution; Novelty metric; Summary; Further reading

Chapter 2: Python Libraries and Environment Setup
    Suitable Python libraries for neuroevolution experiments; NEAT-Python; NEAT-Python usage example; PyTorch NEAT; PyTorch NEAT usage example; MultiNEAT; MultiNEAT usage example; Deep Neuroevolution; Comparing Python neuroevolution libraries; Environment setup; Pipenv; Virtualenv; Anaconda; Summary

Section 2: Applying Neuroevolution Methods to Solve Classic Computer Science Problems

Chapter 3: Using NEAT for XOR Solver Optimization
    Technical requirements; XOR problem basics; The objective function for the XOR experiment; Hyperparameter selection; NEAT section; DefaultStagnation section; DefaultReproduction section; DefaultSpeciesSet section; DefaultGenome section; XOR experiment hyperparameters; Running the XOR experiment; Environment setup; XOR experiment source code; Running the experiment and analyzing the results; Exercises; Summary

Chapter 4: Pole-Balancing Experiments
    Technical requirements; The single-pole balancing problem; The equations of motion of the single-pole balancer; State equations and control actions; The interactions between the solver and the simulator; Objective function for a single-pole balancing experiment; Cart-pole apparatus simulation; The simulation cycle; Genome fitness evaluation; The single-pole balancing experiment; Hyperparameter selection; Working environment setup; The experiment runner implementation; Function to evaluate the fitness of all genomes in the population; The experiment runner function; Running the single-pole balancing experiment; Exercises; The double-pole balancing problem; The system state and equations of motion; Reinforcement signal; Initial conditions and state update; Control actions; Interactions between the solver and the simulator; Objective function for a double-pole balancing experiment; Double-pole balancing experiment; Hyperparameter selection; Working environment setup; The experiment runner implementation; Running the double-pole balancing experiment; Exercises; Summary

Chapter 5: Autonomous Maze Navigation
    Technical requirements; Maze navigation problem; Maze simulation environment; Maze-navigating agent; Maze simulation environment implementation; Sensor data generation; Agent position update; Agents records store; The agent record visualization; Objective function definition using the fitness score; Running the experiment with a simple maze configuration; Hyperparameter selection; Maze configuration file; Working environment setup; The experiment runner implementation; Genome fitness evaluation; Running the simple maze navigation experiment; Agent record visualization; Exercises; Running the experiment with a hard-to-solve maze configuration; Hyperparameter selection; Working environment setup and experiment runner implementation; Running the hard-to-solve maze navigation experiment; Exercises; Summary

Chapter 6: Novelty Search Optimization Method
    Technical requirements; The NS optimization method; NS implementation basics; NoveltyItem; NoveltyArchive; The fitness function with the novelty score; The novelty score; The novelty metric; Fitness function; The population fitness evaluation function; The individual fitness evaluation function; Experimenting with a simple maze configuration; Hyperparameter selection; Working environment setup; The experiment runner implementation; The trials cycle; The experiment runner function; Running the simple maze navigation experiment with NS optimization; Agent record visualization; Exercise 1; Experimenting with a hard-to-solve maze configuration; Hyperparameter selection and working environment setup; Running the hard-to-solve maze navigation experiment; Exercise 2; Summary

Section 3: Advanced Neuroevolution Methods

Chapter 7: Hypercube-Based NEAT for Visual Discrimination
    Technical requirements; Indirect encoding of ANNs with CPPNs; CPPN encoding; Hypercube-based NeuroEvolution of Augmenting Topologies; Visual discrimination experiment basics; Objective function definition; Visual discrimination experiment setup; Visual discriminator test environment; Visual field definition; Visual discriminator environment; Experiment runner; The experiment runner function; Initializing the first CPPN genome population; Running the neuroevolution over the specified number of generations; Saving the results of the experiment; The substrate builder function; Fitness evaluation; Visual discrimination experiment; Hyperparameter selection; Working environment setup; Running the visual discrimination experiment; Exercises; Summary

Chapter 8: ES-HyperNEAT and the Retina Problem
    Technical requirements; Manual versus evolution-based configuration of the topography of neural nodes; Quadtree information extraction and ES-HyperNEAT basics; Modular retina problem basics; Objective function definition; Modular retina experiment setup; The initial substrate configuration; Test environment for the modular retina problem; The visual object definition; The retina environment definition; The function to create a dataset with all the possible visual objects; The function to evaluate the detector ANN against two specific visual objects; Experiment runner; The experiment runner function; The substrate builder function; Fitness evaluation; The eval_genomes function; The eval_individual function; Modular retina experiment; Hyperparameter selection; Working environment setup; Running the modular retina experiment; Exercises; Summary

Chapter 9: Co-Evolution and the SAFE Method
    Technical requirements; Common co-evolution strategies; SAFE method; Modified maze experiment; The maze-solving agent; The maze environment; Fitness function definition; Fitness function for maze solvers; Fitness function for the objective function candidates; Modified Novelty Search; The _add_novelty_item function; The evaluate_novelty_score function; Modified maze experiment implementation; Creation of co-evolving populations; Creation of the population of the objective function candidates; Creating the population of maze solvers; The fitness evaluation of the co-evolving populations; Fitness evaluation of objective function candidates; The evaluate_obj_functions function implementation; The evaluate_individ_obj_function function implementation; Fitness evaluation of the maze-solver agents; The evaluate_solutions function implementation; The evaluate_individual_solution function implementation; The evaluate_solution_fitness function implementation; The modified maze experiment runner; Modified maze experiment; Hyperparameters for the maze-solver population; Hyperparameters for the objective function candidates population; Working environment setup; Running the modified maze experiment; Exercises; Summary

Chapter 10: Deep Neuroevolution
    Technical requirements; Deep neuroevolution for deep reinforcement learning; Evolving an agent to play the Frostbite Atari game using deep neuroevolution; The Frostbite Atari game; Game screen mapping into actions; Convolutional layers; The CNN architecture to train the Atari playing agent; The RL training of the game agent; The genome encoding scheme; Genome encoding scheme definition; Genome encoding scheme implementation; The simple genetic algorithm; Training an agent to play the Frostbite game; Atari Learning Environment; The game step function; The game observation function; The reset Atari environment function; RL evaluation on GPU cores; The RLEvalutionWorker class; Creating the network graph; The graph evaluation loop; The asynchronous task runner; The ConcurrentWorkers class; Creating the evaluation workers; Running work tasks and monitoring results; Experiment runner; Experiment configuration file; Experiment runner implementation; Running the Frostbite Atari experiment; Setting up the work environment; Running the experiment; Frostbite visualization; Visual inspector for neuroevolution; Setting up the work environment; Using VINE for experiment visualization; Exercises; Summary

Section 4: Discussion and Concluding Remarks

Chapter 11: Best Practices, Tips, and Tricks
    Starting with problem analysis; Preprocessing data; Data standardization; Scaling inputs to a range; Data normalization; Understanding the problem domain; Writing good simulators; Selecting the optimal search optimization method; Goal-oriented search optimization; Mean squared error; Euclidean distance; Novelty Search optimization; Advanced visualization; Tuning hyperparameters; Performance metrics; Precision score; Recall score; F1 score; ROC AUC; Accuracy; Python coding tips and tricks; Coding tips and tricks; Working environment and programming tools; Summary

Chapter 12: Concluding Remarks
    What we learned in this book; Overview of the neuroevolution methods; Python libraries and environment setup; Using NEAT for XOR solver optimization; Pole-balancing experiments; Autonomous maze navigation; Novelty Search optimization method; Hypercube-based NEAT for visual discrimination; ES-HyperNEAT and the retina problem; Co-evolution and the SAFE method; Deep Neuroevolution; Where to go from here; Uber AI Labs; alife.org; Open-ended evolution at Reddit; The NEAT Software Catalog; arXiv.org; The NEAT algorithm paper; Summary

Other Books You May Enjoy

Index
Neuroevolution is a form of artificial intelligence learning that uses evolutionary algorithms to simplify the process of solving complex tasks in domains such as games, robotics, and the simulation of natural processes. This book will give you comprehensive insights into essential neuroevolution concepts and equip you with the skills you need to apply neuroevolution-based algorithms to solve practical, real-world problems. You'll start with learning the key neuroevolution concepts and methods by writing code with Python. You'll also get hands-on experience with popular Python libraries and cover examples of classical reinforcement learning, path planning for autonomous agents, and developing agents to autonomously play Atari games. Next, you'll learn to solve common and not-so-common challenges in natural computing using neuroevolution-based algorithms. Later, you'll understand how to apply neuroevolution strategies to existing neural network designs to improve training and inference performance. Finally, you'll gain clear insights into the topology of neural networks and how neuroevolution allows you to develop complex networks, starting with simple ones. By the end of this book, you will not only have explored existing neuroevolution-based algorithms, but also have the skills you need to apply them in your research and work assignments.

Things you will learn:
• Discover the most popular neuroevolution algorithms - NEAT, HyperNEAT, and ES-HyperNEAT
• Understand how to examine the results of experiments and analyze algorithm performance
• Explore how to implement neuroevolution-based algorithms in Python
• Delve into neuroevolution techniques to improve the performance of existing methods
• Get up to speed with advanced visualization tools to examine evolved neural network graphs
• Apply deep neuroevolution to develop agents for playing Atari games
any_adam_object | 1 |
any_adam_object_boolean | 1 |
author | Omelianenko, Iaroslav |
author_facet | Omelianenko, Iaroslav |
author_role | aut |
author_sort | Omelianenko, Iaroslav |
author_variant | i o io |
building | Verbundindex |
bvnumber | BV046617482 |
classification_rvk | ST 250 |
ctrlnum | (OCoLC)1164639289 (DE-599)BVBBV046617482 |
discipline | Informatik |
discipline_str_mv | Informatik |
format | Book |
fullrecord | <?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01804nam a2200361 c 4500</leader><controlfield tag="001">BV046617482</controlfield><controlfield tag="003">DE-604</controlfield><controlfield tag="005">20200612 </controlfield><controlfield tag="007">t</controlfield><controlfield tag="008">200306s2019 a||| |||| 00||| eng d</controlfield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">9781838824914</subfield><subfield code="9">978-1-83882-491-4</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(OCoLC)1164639289</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)BVBBV046617482</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-604</subfield><subfield code="b">ger</subfield><subfield code="e">rda</subfield></datafield><datafield tag="041" ind1="0" ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="049" ind1=" " ind2=" "><subfield code="a">DE-355</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">ST 250</subfield><subfield code="0">(DE-625)143626:</subfield><subfield code="2">rvk</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Omelianenko, Iaroslav</subfield><subfield code="e">Verfasser</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Hands-on neuroevolution with Python</subfield><subfield code="b">build high-performing artificial neural network architectures using neuroevolution-based algorithms</subfield><subfield code="c">Iaroslav Omelianenko</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="a">Birmingham</subfield><subfield code="b">Packt Publishing, Limited</subfield><subfield code="c">2019</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="a">Birmingham ; Mumbai</subfield><subfield code="b">Packt</subfield><subfield code="c">December 2019</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">353 Seiten</subfield><subfield code="b">Illustrationen</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="b">n</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="b">nc</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">Python</subfield><subfield code="g">Programmiersprache</subfield><subfield code="0">(DE-588)4434275-5</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">Künstliche Intelligenz</subfield><subfield code="0">(DE-588)4033447-8</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="689" ind1="0" ind2="0"><subfield code="a">Künstliche Intelligenz</subfield><subfield code="0">(DE-588)4033447-8</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2="1"><subfield code="a">Python</subfield><subfield code="g">Programmiersprache</subfield><subfield code="0">(DE-588)4434275-5</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2=" "><subfield code="5">DE-604</subfield></datafield><datafield tag="856" ind1="4" 
ind2="2"><subfield code="m">Digitalisierung UB Regensburg - ADAM Catalogue Enrichment</subfield><subfield code="q">application/pdf</subfield><subfield code="u">http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=032029260&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA</subfield><subfield code="3">Inhaltsverzeichnis</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="m">Digitalisierung UB Regensburg - ADAM Catalogue Enrichment</subfield><subfield code="q">application/pdf</subfield><subfield code="u">http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=032029260&sequence=000003&line_number=0002&func_code=DB_RECORDS&service_type=MEDIA</subfield><subfield code="3">Klappentext</subfield></datafield><datafield tag="999" ind1=" " ind2=" "><subfield code="a">oai:aleph.bib-bvb.de:BVB01-032029260</subfield></datafield></record></collection> |
id | DE-604.BV046617482 |
illustrated | Illustrated |
index_date | 2024-07-03T14:07:05Z |
indexdate | 2024-07-10T08:49:21Z |
institution | BVB |
isbn | 9781838824914 |
language | English |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-032029260 |
oclc_num | 1164639289 |
open_access_boolean | |
owner | DE-355 DE-BY-UBR |
owner_facet | DE-355 DE-BY-UBR |
physical | 353 Seiten Illustrationen |
publishDate | 2019 |
publishDateSearch | 2019 |
publishDateSort | 2019 |
publisher | Packt Publishing, Limited Packt |
record_format | marc |
spelling | Omelianenko, Iaroslav Verfasser aut Hands-on neuroevolution with Python build high-performing artificial neural network architectures using neuroevolution-based algorithms Iaroslav Omelianenko Birmingham Packt Publishing, Limited 2019 Birmingham ; Mumbai Packt December 2019 353 Seiten Illustrationen txt rdacontent n rdamedia nc rdacarrier Python Programmiersprache (DE-588)4434275-5 gnd rswk-swf Künstliche Intelligenz (DE-588)4033447-8 gnd rswk-swf Künstliche Intelligenz (DE-588)4033447-8 s Python Programmiersprache (DE-588)4434275-5 s DE-604 Digitalisierung UB Regensburg - ADAM Catalogue Enrichment application/pdf http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=032029260&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA Inhaltsverzeichnis Digitalisierung UB Regensburg - ADAM Catalogue Enrichment application/pdf http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=032029260&sequence=000003&line_number=0002&func_code=DB_RECORDS&service_type=MEDIA Klappentext |
spellingShingle | Omelianenko, Iaroslav Hands-on neuroevolution with Python build high-performing artificial neural network architectures using neuroevolution-based algorithms Python Programmiersprache (DE-588)4434275-5 gnd Künstliche Intelligenz (DE-588)4033447-8 gnd |
subject_GND | (DE-588)4434275-5 (DE-588)4033447-8 |
title | Hands-on neuroevolution with Python build high-performing artificial neural network architectures using neuroevolution-based algorithms |
title_auth | Hands-on neuroevolution with Python build high-performing artificial neural network architectures using neuroevolution-based algorithms |
title_exact_search | Hands-on neuroevolution with Python build high-performing artificial neural network architectures using neuroevolution-based algorithms |
title_exact_search_txtP | Hands-on neuroevolution with Python build high-performing artificial neural network architectures using neuroevolution-based algorithms |
title_full | Hands-on neuroevolution with Python build high-performing artificial neural network architectures using neuroevolution-based algorithms Iaroslav Omelianenko |
title_fullStr | Hands-on neuroevolution with Python build high-performing artificial neural network architectures using neuroevolution-based algorithms Iaroslav Omelianenko |
title_full_unstemmed | Hands-on neuroevolution with Python build high-performing artificial neural network architectures using neuroevolution-based algorithms Iaroslav Omelianenko |
title_short | Hands-on neuroevolution with Python |
title_sort | hands on neuroevolution with python build high performing artificial neural network architectures using neuroevolution based algorithms |
title_sub | build high-performing artificial neural network architectures using neuroevolution-based algorithms |
topic | Python Programmiersprache (DE-588)4434275-5 gnd Künstliche Intelligenz (DE-588)4033447-8 gnd |
topic_facet | Python Programmiersprache Künstliche Intelligenz |
url | http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=032029260&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=032029260&sequence=000003&line_number=0002&func_code=DB_RECORDS&service_type=MEDIA |
work_keys_str_mv | AT omelianenkoiaroslav handsonneuroevolutionwithpythonbuildhighperformingartificialneuralnetworkarchitecturesusingneuroevolutionbasedalgorithms |