Shallow and deep learning principles: scientific, philosophical, and logical perspectives
Saved in:
Main Author: | Şen, Zekâi 1947- |
---|---|
Format: | Book |
Language: | English |
Published: | Cham : Springer, [2023] |
Subjects: | Neural networks (Computer science) ; Machine learning ; Deep learning |
Online Access: | Table of contents |
Summary: | This book discusses Artificial Neural Networks (ANN) and their ability to predict outcomes using deep and shallow learning principles. The author first describes ANN implementation, consisting of at least three layers of cells that must be established together: one is the input layer, another the output layer, and the third a hidden (intermediate) layer. For this, the author states, it is necessary to develop an architecture that models not mathematical rules but only the action and response variables that control the event and the reactions that may occur within it. The book explains the reasons for and necessity of each ANN model, considering its similarity to previous methods and to philosophical-logical rules. |
Physical Description: | xx, 661 pages : illustrations, diagrams ; 24 cm |
ISBN: | 9783031295546 3031295544 |
Internal format
MARC
LEADER | 00000nam a2200000 c 4500 | ||
---|---|---|---|
001 | BV049590590 | ||
003 | DE-604 | ||
005 | 20240410 | ||
007 | t | ||
008 | 240228s2023 a||| b||| 00||| eng d | ||
020 | |a 9783031295546 |9 978-3-031-29554-6 | ||
020 | |a 3031295544 |9 3-031-29554-4 | ||
035 | |a (OCoLC)1429560131 | ||
035 | |a (DE-599)BVBBV049590590 | ||
040 | |a DE-604 |b ger |e rda | ||
041 | 0 | |a eng | |
049 | |a DE-739 | ||
084 | |a ST 300 |0 (DE-625)143650: |2 rvk | ||
100 | 1 | |a Şen, Zekâi |d 1947- |e Verfasser |0 (DE-588)140990100 |4 aut | |
245 | 1 | 0 | |a Shallow and deep learning principles |b scientific, philosophical, and logical perspectives |c Zekai Sen |
264 | 1 | |a Cham |b Springer |c [2023] | |
300 | |a xx, 661 pages |b Illustrationen, Diagramme |c 24 cm | ||
336 | |b txt |2 rdacontent | ||
337 | |b n |2 rdamedia | ||
338 | |b nc |2 rdacarrier | ||
505 | 8 | |a Introduction -- Philosophical and Logical Principles in Science -- Uncertainty and Modeling Principles -- Mathematical Modeling Principles -- Genetic Algorithm -- Artificial Neural Networks -- Artificial Intelligence -- Machine Learning -- Deep Learning -- Conclusion | |
520 | 3 | |a This book discusses Artificial Neural Networks (ANN) and their ability to predict outcomes using deep and shallow learning principles. The author first describes ANN implementation, consisting of at least three layers of cells that must be established together: one is the input layer, another the output layer, and the third a hidden (intermediate) layer. For this, the author states, it is necessary to develop an architecture that models not mathematical rules but only the action and response variables that control the event and the reactions that may occur within it. The book explains the reasons for and necessity of each ANN model, considering its similarity to previous methods and to philosophical-logical rules. | |
650 | 0 | 7 | |a Neuronales Netz |0 (DE-588)4226127-2 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Deep learning |0 (DE-588)1135597375 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Maschinelles Lernen |0 (DE-588)4193754-5 |2 gnd |9 rswk-swf |
653 | 0 | |a Neural networks (Computer science) | |
653 | 0 | |a Machine learning | |
653 | 0 | |a Réseaux neuronaux (Informatique) | |
653 | 0 | |a Apprentissage automatique | |
653 | 0 | |a Machine learning | |
653 | 0 | |a Neural networks (Computer science) | |
689 | 0 | 0 | |a Maschinelles Lernen |0 (DE-588)4193754-5 |D s |
689 | 0 | 1 | |a Deep learning |0 (DE-588)1135597375 |D s |
689 | 0 | 2 | |a Neuronales Netz |0 (DE-588)4226127-2 |D s |
689 | 0 | |5 DE-604 | |
776 | 0 | 8 | |i Electronic version |a Sen, Zekai |t Shallow and deep learning principles |d Cham : Springer, [2023] |z 9783031295553 |
856 | 4 | 2 | |m Digitalisierung UB Passau - ADAM Catalogue Enrichment |q application/pdf |u http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=034935271&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |3 Inhaltsverzeichnis |
Record in the search index
_version_ | 1805071318949822464 |
---|---|
adam_text |
Contents:
1 Introduction: 1.1 General; 1.2 Historical Developments; 1.3 Information and Knowledge Evolution Stages; 1.4 Determinism Versus Uncertainty; 1.4.1 Randomness; 1.5 Logic; 1.5.1 Bivalent (Crisp) Logic; 1.5.2 Fuzzy Logic; 1.5.3 Bivalent-Fuzzy Distinction; 1.6 Humans, Society, and Technology; 1.7 Education and Uncertainty; 1.8 Future Scientific Methodological Developments; 1.8.1 Shallow Learning; 1.8.2 Deep Learning; 1.8.3 Shallow-Deep Learning Relations; 1.9 Innovation; 1.9.1 Inventions; 1.10 Book Content and Reading Recommendations; 1.11 Conclusions; References
2 Artificial Intelligence: 2.1 General; 2.2 Artificial Intelligence History; 2.2.1 Before Renaissance; 2.2.2 Recent History; 2.2.3 AI Education; 2.3 Humans and Intelligence; 2.4 Intelligence Types; 2.5 Artificial Intelligence Methods; 2.5.1 Rational AI; 2.5.2 Methodical AI; 2.6 Artificial Intelligence Methodologies; 2.7 Natural and Artificial Intelligence Comparison; 2.8 AI Proposal; 2.9 AI in Science and Technology; 2.10 Misuses in Artificial Intelligence Studies; 2.11 Conclusions; References
3 Philosophical and Logical Principles in Science: 3.1 General; 3.2 Human Mind; 3.3 Rational Thought Models and Reasoning; 3.3.1 Deductive; 3.3.2 Inductive; 3.3.3 Deductive and Inductive Conclusion; 3.3.4 Proportionality Rule; 3.3.5 Shape Rule; 3.4 Philosophy; 3.4.1 Ontology; 3.4.2 Metaphysics; 3.4.3 Epistemology; 3.4.4 Aesthetics; 3.4.5 Ethics; 3.5 Science; 3.5.1 Phenomenological; 3.5.2 Logical Foundation; 3.5.3 Objectivity; 3.5.4 Testability; 3.5.5 Selectivity; 3.5.6 Falsification; 3.5.7 Restrictive Assumptions; 3.6 Science and Philosophy; 3.6.1 Philosophy of Science; 3.6.2 Implications for the Philosophy of Science; 3.7 Logic; 3.7.1 Logic Rules; 3.7.2 Elements of Logic; 3.7.3 Logic Sentences (Propositions); 3.7.4 Propositions and Inferences; 3.7.5 Logic Circuits; 3.7.6 Logic Types; 3.8 Sets and Clusters; 3.8.1 Crisp Sets; 3.8.2 Fuzzy Sets; 3.9 Fuzzy Logic Principles; 3.9.1 Fuzziness in Daily Affairs; 3.9.2 Fuzzy Logical Thinking Model; 3.9.3 The Need for Fuzzy Logic; 3.9.4 Mind as the Source of Fuzziness; 3.9.5 Fuzzy Propositions; 3.9.6 Fuzzy Inference System (FIS); 3.9.7 Fuzzy Modeling Systems; 3.10 Defuzzification; 3.11 Conclusions; References
4 Uncertainty and Modeling Principles: 4.1 General; 4.2 Percentages and Probability Principles; 4.3 Probability Measures and Definitions; 4.3.1 Frequency Definition; 4.3.2 Classical Definition; 4.3.3 Subjective Definition; 4.4 Types of Probability; 4.4.1 Common Probability; 4.4.2 Conditional Probability; 4.4.3 Marginal Probability; 4.5 Axioms of Probability; 4.5.1 Probability Dependence and Independence; 4.5.2 Probability Assumptions; 4.6 Numerical Uncertainties; 4.6.1 Uncertainty Definitions; 4.7 Forecast: Estimation; 4.8 Types of Uncertainty; 4.8.1 Chaotic Uncertainty; 4.9 Indeterminism; 4.10 Uncertainty in Science; 4.11 Importance of Statistics; 4.12 Basic Questions Prior to Data Treatment; 4.13 Simple Probability Principles; 4.14 Statistical Principles; 4.15 Statistical Parameters; 4.15.1 Central Measure Parameters; 4.15.2 Deviation Parameters; 4.16 Histogram (Percentage Frequency Diagram); 4.16.1 Data Frequency; 4.16.2 Subintervals and Parameters; 4.17 Normal (Gaussian) Test; 4.18 Statistical Model Efficiency Formulations; 4.19 Correlation Coefficient; 4.19.1 Pearson Correlation; 4.19.2 Nonparametric Correlation Coefficient; 4.20 Classical Regression Techniques; 4.20.1 Scatter Diagrams; 4.20.2 Mathematical Linear Regression Model; 4.20.3 Statistical Linear Regression Model; 4.20.4 Least Squares; 4.20.5 Simple Linear Regression Procedure; 4.20.6 Residual Properties; 4.21 Cluster Regression Analysis; 4.21.1 Study Area; 4.21.2 Cluster Regression Model; 4.21.3 Application; 4.22 Trend Identification Methodologies; 4.22.1 Mann-Kendall (MK) Test; 4.22.2 Sen Slope (SS); 4.22.3 Regression Method (RM); 4.22.4 Spearman's Rho Test (SR); 4.22.5 Pettitt Change Point Test; 4.22.6 Innovative Trend Analysis (ITA); 4.23 Future Directions and Recommendations; 4.24 Conclusions; References
5 Mathematical Modeling Principles: 5.1 General; 5.2 Conceptual Models; 5.2.1 Knowledge and Information; 5.2.2 Observation; 5.2.3 Experience; 5.2.4 Audiovisual; 5.3 Mathematics; 5.3.1 Arithmetic Operations; 5.3.2 Logical Relationships; 5.3.3 Equations; 5.4 Geometry and Algebra; 5.5 Modeling Principles; 5.5.1 Square Graph for Model Output Justification; 5.5.2 Model Modification; 5.5.3 Discussions; 5.6 Equation with Logic; 5.6.1 Equation by Experiment; 5.6.2 Extracting Equations from Data; 5.6.3 Extracting Equations from Dimensions; 5.7 Logical Mathematical Derivations; 5.7.1 Logic Modeling of Electrical Circuits; 5.8 Risk and Reliability; 5.9 The Logic of Mathematical Functions; 5.9.1 Straight Line; 5.9.2 Quadratic Curve (Parabola); 5.9.3 Cubic Curve; 5.9.4 Multi-degree Curve (Polynomial); 5.9.5 Equation with Decimal Exponent (Power Function); 5.9.6 Exponential Curve; 5.9.7 Logarithmic Curve; 5.9.8 Double Asymptotic Curve (Hyperbola); 5.9.9 Complex Curve; 5.10 Mathematics Logic and Language; 5.10.1 From Language to Mathematics; 5.10.2 From Mathematics to Language; 5.11 Mathematical Models; 5.11.1 Closed Mathematics Models; 5.11.2 Explicit Mathematics Models; 5.11.3 Polynomial Models; 5.12 Conclusions; Appendix A: VINAM Matlab Software; References
6 Genetic Algorithm: 6.1 General; 6.2 Decimal Number System; 6.3 Binary Number System; 6.4 Random Numbers; 6.4.1 Random Selections; 6.5 Genetic Numbers; 6.5.1 Genetic Algorithm Data Structure; 6.6 Methods of Optimization; 6.6.1 Definition and Characteristics of Genetic Algorithms; 6.7 Least Minimization Methods; 6.7.1 Completely Systematic Research Method; 6.7.2 Analytical Optimization; 6.7.3 Steepest Ascent (Descent) Method; 6.8 Simulated Annealing (SA) Method; 6.8.1 Application; 6.8.2 Random Optimization Methods; 6.9 Binary Genetic Algorithms (GA); 6.9.1 Benefits and Consequences of Genetic Algorithm (GA); 6.9.2 Definition of GAs; 6.9.3 Binary GA; 6.9.4 Selection of Variables and Target Function; 6.9.5 Target Function and Vigor Measurement; 6.9.6 Representation of Variables; 6.9.7 Initial Population; 6.9.8 Selection Process; 6.9.9 Selection of Spouses; 6.9.10 Crossing; 6.9.11 Mutation (Number Change); 6.10 Probabilities of GA Transactions; 6.11 Gray Codes; 6.12 Important Issues Regarding the Behavior of the Method; 6.13 Convergence and Schema Concept; 6.14 GA Parameters Selection; 6.14.1 Gene Size; 6.14.2 Population Size; 6.14.3 Example of a Simple GA; 6.14.4 Decimal Number-Based Genetic Algorithms; 6.14.5 Continuous Variable GA Elements; 6.14.6 Variables and Goal Function; 6.14.7 Parameter Coding, Accuracy, and Limits; 6.14.8 Initial Population; 6.15 General Applications; 6.15.1 Function Maximizing; 6.15.2 Geometric Weight Functions; 6.15.3 Classification of Precipitation Condition; 6.15.4 Two Independent Datasets; 6.16 Conclusions; References
7 Artificial Neural Networks: 7.1 General; 7.2 Biological Structure; 7.3 ANN Definition and Characteristics; 7.4 History; 7.5 ANN Principles; 7.5.1 ANN Terminology and Usage; 7.5.2 Areas of ANN Use; 7.5.3 Similarity of ANNs to Classic Methods; 7.6 Vector and Matrix Similarity; 7.6.1 Similarity to Kalman Filters; 7.6.2 Similarity to Multiple Regression; 7.6.3 Similarity to Stochastic Processes; 7.6.4 Similarity to Black Box Models; 7.7 ANN Structure; 7.8 Perceptron (Single Linear Sensor); 7.8.1 Perceptron Principles; 7.8.2 Perceptron Architecture; 7.8.3 Perceptron Algorithm; 7.8.4 Perceptron Implementation; 7.9 Single Recurrent Linear Neural Network; 7.9.1 ADALINE Application; 7.9.2 Multi-linear Sensors (MLS); 7.9.3 Multiple Adaptive Linear Element (MADALINE) Neural Network; 7.9.4 ORing Problem; 7.10 Multilayer Artificial Neural Networks and Management Principles; 7.11 ANN Properties; 7.11.1 ANN Architectures; 7.11.2 Layered ANN; 7.11.3 System Dynamics; 7.12 Activation Functions; 7.12.1 Linear; 7.12.2 Threshold; 7.12.3 Ramp; 7.12.4 Sigmoid; 7.12.5 Hyperbolic; 7.12.6 Gaussian; 7.13 Key Points Before ANN Modeling; 7.13.1 ANN Audit; 7.13.2 ANN Mathematical Calculations; 7.13.3 Training and Modeling with Artificial Networks; 7.13.4 ANN Learning Algorithm; 7.13.5 ANN Education; 7.14 Description of Training Rules; 7.14.1 Supervised Training; 7.14.2 Unsupervised Training; 7.14.3 Compulsory Supervision; 7.15 Competitive Education; 7.15.1 Semi-teacher Training; 7.15.2 Learning Rule Algorithms; 7.15.3 Back Propagation Algorithm; 7.16 Renovative Oscillation Theory (ROT) ANN; 7.16.1 Differences of ROT ANN and Others; 7.16.2 ROT ANN Architecture; 7.16.3 ROT ANN Education; 7.17 ANN with Radial Basis Activation Function; 7.17.1 K-Means; 7.17.2 Radial Basis Activation Function; 7.17.3 RBF ANN Architecture; 7.17.4 RBF ANN Training; 7.18 Recycled Artificial Neural Networks; 7.18.1 Elman ANN; 7.18.2 Elman ANN Training; 7.19 Hopfield ANN; 7.19.1 Discrete Hopfield ANN; 7.19.2 Application; 7.19.3 Continuous Hopfield ANN; 7.20 Simple Competitive Learning Network; 7.20.1 Application; 7.21 Self-Organizing Mapping ANN; 7.21.1 SOM ANN Training; 7.22 Memory Congruent ANN; 7.22.1 Matrix Fit Memory Method; 7.23 General Applications; 7.23.1 Missing Data Complement Application; 7.23.2 Classification Application; 7.23.3 Temperature Prediction Application; 7.24 Conclusions; References
8 Machine Learning: 8.1 General; 8.2 Machine Learning-Related Topics; 8.3 Historical Backgrounds of ML and AI Couple; 8.4 Machine Learning Future; 8.4.1 Dataset; 8.5 Uncertainty Sources and Calculation Methods; 8.5.1 Reduction of Uncertainties; 8.5.2 Probability Density Functions (PDF); 8.5.3 Confidence Interval; 8.5.4 Uncertainty Roadmap; 8.6 Features; 8.7 Labels; 8.7.1 Numeric Labels: Regression; 8.7.2 Categorical Labels: Classification; 8.7.3 Ordinal Labels; 8.8 Learning Through Applications; 8.9 Forecast Verification; 8.9.1 Parametric Validation; 8.9.2 Prediction Skill; 8.9.3 The Contingency Table; 8.10 Learning Methods; 8.10.1 Supervised Learning; 8.10.2 Unsupervised Learning; 8.10.3 Reinforcement Learning; 8.11 Objective and Loss Functions; 8.11.1 Loss Function for Classification; 8.12 Optimization; 8.13 ML and Simple Linear Regression; 8.13.1 Least Square Technique; 8.14 Classification and Categorization; 8.15 Clustering; 8.15.1 Clustering Goal; 8.15.2 Clustering Algorithm; 8.15.3 Cluster Distance Measure; 8.16 k-Means Clustering; 8.17 Fuzzy c-Means Clustering; 8.18 Frequency-k-Means-c-Means Relationship; 8.19 Dimensionality Reduction; 8.20 Ensemble Methods; 8.21 Neural Nets and Deep Learning; 8.21.1 Learning; 8.22 Conclusions; Appendix: Required Software for Data Reliability Analysis; References
9 Deep Learning: 9.1 General; 9.2 Deep Learning Methods; 9.2.1 Deep Learning Neural Networks; 9.2.2 Limitations and Challenges; 9.3 Deep Learning and Machine Learning; 9.4 Different Neural Network Architectures; 9.5 Convolutional Neural Network (CNN); 9.5.1 CNN Foundation; 9.5.2 CNN Network Layers; 9.6 Activation Functions; 9.6.1 Sigmoid; 9.6.2 Tanh; 9.6.3 Rectifier Linear Unit (ReLU); 9.6.4 Leaky ReLU; 9.6.5 Noisy ReLU; 9.6.6 Parametric Linear Units; 9.7 Fully Connected (FC) Layer; 9.8 Optimization (Loss) Functions; 9.8.1 Soft-Max Loss Function (Cross-Entropy); 9.8.2 Euclidean Loss Function; 9.8.3 Hinge Loss Function; 9.9 CNN Training Process; 9.10 Parameter Initialization; 9.11 Regularization to CNN; 9.11.1 Dropout; 9.11.2 Drop-Weights; 9.11.3 The ℓ2 Regularization; 9.11.4 The ℓ1 Regularization; 9.11.5 Early Stopping; 9.12 Recurrent Neural Networks; 9.12.1 RNN Architecture Structure; 9.12.2 RNN Computational Time Sequence; 9.13 The Problem of Long-Term Dependencies; 9.13.1 LSTM Network; 9.14 Autoencoders; 9.14.1 Deep Convolutional AE (DCAE); 9.15 Natural Language Models; 9.16 Conclusions; References; Index |
any_adam_object | 1 |
any_adam_object_boolean | 1 |
author | Şen, Zekâi 1947- |
author_GND | (DE-588)140990100 |
author_facet | Şen, Zekâi 1947- |
author_role | aut |
author_sort | Şen, Zekâi 1947- |
author_variant | z ş zş |
building | Verbundindex |
bvnumber | BV049590590 |
classification_rvk | ST 300 |
contents | Introduction -- Philosophical and Logical Principles in Science -- Uncertainty and Modeling Principles -- Mathematical Modeling Principles -- Genetic Algorithm -- Artificial Neural Networks -- Artificial Intelligence -- Machine Learning -- Deep Learning -- Conclusion |
ctrlnum | (OCoLC)1429560131 (DE-599)BVBBV049590590 |
discipline | Informatik |
discipline_str_mv | Informatik |
format | Book |
fullrecord | <?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>00000nam a2200000 c 4500</leader><controlfield tag="001">BV049590590</controlfield><controlfield tag="003">DE-604</controlfield><controlfield tag="005">20240410</controlfield><controlfield tag="007">t</controlfield><controlfield tag="008">240228s2023 a||| b||| 00||| eng d</controlfield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">9783031295546</subfield><subfield code="9">978-3-031-29554-6</subfield></datafield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">3031295544</subfield><subfield code="9">3-031-29554-4</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(OCoLC)1429560131</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)BVBBV049590590</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-604</subfield><subfield code="b">ger</subfield><subfield code="e">rda</subfield></datafield><datafield tag="041" ind1="0" ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="049" ind1=" " ind2=" "><subfield code="a">DE-739</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">ST 300</subfield><subfield code="0">(DE-625)143650:</subfield><subfield code="2">rvk</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Şen, Zekâi</subfield><subfield code="d">1947-</subfield><subfield code="e">Verfasser</subfield><subfield code="0">(DE-588)140990100</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Shallow and deep learning principles</subfield><subfield code="b">scientific, philosophical, and logical perspectives</subfield><subfield code="c">Zekai Sen</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="a">Cham</subfield><subfield code="b">Springer</subfield><subfield 
code="c">[2023]</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">xx, 661 pages</subfield><subfield code="b">Illustrationen, Diagramme</subfield><subfield code="c">24 cm</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="b">n</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="b">nc</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="505" ind1="8" ind2=" "><subfield code="a">Introduction -- Philosophical and Logical Principles in Science -- Uncertainty and Modeling Principles -- Mathematical Modeling Principles -- Genetic Algorithm -- Artificial Neural Networks -- Artificial Intelligence -- Machine Learning -- Deep Learning -- Conclusion</subfield></datafield><datafield tag="520" ind1="3" ind2=" "><subfield code="a">This book discusses Artificial Neural Networks (ANN) and their ability to predict outcomes using deep and shallow learning principles. The author first describes ANN implementation, consisting of at least three layers that must be established together with cells, one of which is input, the other is output, and the third is a hidden (intermediate) layer. For this, the author states, it is necessary to develop an architecture that will not model mathematical rules but only the action and response variables that control the event and the reactions that may occur within it. 
The book explains the reasons and necessity of each ANN model, considering the similarity to the previous methods and the philosophical - logical rules</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">Neuronales Netz</subfield><subfield code="0">(DE-588)4226127-2</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">Deep learning</subfield><subfield code="0">(DE-588)1135597375</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="650" ind1="0" ind2="7"><subfield code="a">Maschinelles Lernen</subfield><subfield code="0">(DE-588)4193754-5</subfield><subfield code="2">gnd</subfield><subfield code="9">rswk-swf</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Neural networks (Computer science)</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Machine learning</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Réseaux neuronaux (Informatique)</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Apprentissage automatique</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Machine learning</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Neural networks (Computer science)</subfield></datafield><datafield tag="689" ind1="0" ind2="0"><subfield code="a">Maschinelles Lernen</subfield><subfield code="0">(DE-588)4193754-5</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2="1"><subfield code="a">Deep learning</subfield><subfield code="0">(DE-588)1135597375</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2="2"><subfield code="a">Neuronales Netz</subfield><subfield code="0">(DE-588)4226127-2</subfield><subfield code="D">s</subfield></datafield><datafield 
tag="689" ind1="0" ind2=" "><subfield code="5">DE-604</subfield></datafield><datafield tag="776" ind1="0" ind2="8"><subfield code="i">Electronic version</subfield><subfield code="a">Sen, Zekai</subfield><subfield code="t">Shallow and deep learning principles</subfield><subfield code="d">Cham : Springer, [2023]</subfield><subfield code="z">9783031295553</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="m">Digitalisierung UB Passau - ADAM Catalogue Enrichment</subfield><subfield code="q">application/pdf</subfield><subfield code="u">http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=034935271&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA</subfield><subfield code="3">Inhaltsverzeichnis</subfield></datafield></record></collection> |
id | DE-604.BV049590590 |
illustrated | Illustrated |
index_date | 2024-07-03T23:33:18Z |
indexdate | 2024-07-20T04:35:56Z |
institution | BVB |
isbn | 9783031295546 3031295544 |
language | English |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-034935271 |
oclc_num | 1429560131 |
open_access_boolean | |
owner | DE-739 |
owner_facet | DE-739 |
physical | xx, 661 pages Illustrationen, Diagramme 24 cm |
publishDate | 2023 |
publishDateSearch | 2023 |
publishDateSort | 2023 |
publisher | Springer |
record_format | marc |
spelling | Şen, Zekâi 1947- Verfasser (DE-588)140990100 aut Shallow and deep learning principles scientific, philosophical, and logical perspectives Zekai Sen Cham Springer [2023] xx, 661 pages Illustrationen, Diagramme 24 cm txt rdacontent n rdamedia nc rdacarrier Introduction -- Philosophical and Logical Principles in Science -- Uncertainty and Modeling Principles -- Mathematical Modeling Principles -- Genetic Algorithm -- Artificial Neural Networks -- Artificial Intelligence -- Machine Learning -- Deep Learning -- Conclusion This book discusses Artificial Neural Networks (ANN) and their ability to predict outcomes using deep and shallow learning principles. The author first describes ANN implementation, consisting of at least three layers that must be established together with cells, one of which is input, the other is output, and the third is a hidden (intermediate) layer. For this, the author states, it is necessary to develop an architecture that will not model mathematical rules but only the action and response variables that control the event and the reactions that may occur within it. The book explains the reasons and necessity of each ANN model, considering the similarity to the previous methods and the philosophical - logical rules Neuronales Netz (DE-588)4226127-2 gnd rswk-swf Deep learning (DE-588)1135597375 gnd rswk-swf Maschinelles Lernen (DE-588)4193754-5 gnd rswk-swf Neural networks (Computer science) Machine learning Réseaux neuronaux (Informatique) Apprentissage automatique Maschinelles Lernen (DE-588)4193754-5 s Deep learning (DE-588)1135597375 s Neuronales Netz (DE-588)4226127-2 s DE-604 Electronic version Sen, Zekai Shallow and deep learning principles Cham : Springer, [2023] 9783031295553 Digitalisierung UB Passau - ADAM Catalogue Enrichment application/pdf http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=034935271&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA Inhaltsverzeichnis |
spellingShingle | Şen, Zekâi 1947- Shallow and deep learning principles scientific, philosophical, and logical perspectives Introduction -- Philosophical and Logical Principles in Science -- Uncertainty and Modeling Principles -- Mathematical Modeling Principles -- Genetic Algorithm -- Artificial Neural Networks -- Artificial Intelligence -- Machine Learning -- Deep Learning -- Conclusion Neuronales Netz (DE-588)4226127-2 gnd Deep learning (DE-588)1135597375 gnd Maschinelles Lernen (DE-588)4193754-5 gnd |
subject_GND | (DE-588)4226127-2 (DE-588)1135597375 (DE-588)4193754-5 |
title | Shallow and deep learning principles scientific, philosophical, and logical perspectives |
title_auth | Shallow and deep learning principles scientific, philosophical, and logical perspectives |
title_exact_search | Shallow and deep learning principles scientific, philosophical, and logical perspectives |
title_exact_search_txtP | Shallow and deep learning principles scientific, philosophical, and logical perspectives |
title_full | Shallow and deep learning principles scientific, philosophical, and logical perspectives Zekai Sen |
title_fullStr | Shallow and deep learning principles scientific, philosophical, and logical perspectives Zekai Sen |
title_full_unstemmed | Shallow and deep learning principles scientific, philosophical, and logical perspectives Zekai Sen |
title_short | Shallow and deep learning principles |
title_sort | shallow and deep learning principles scientific philosophical and logical perspectives |
title_sub | scientific, philosophical, and logical perspectives |
topic | Neuronales Netz (DE-588)4226127-2 gnd Deep learning (DE-588)1135597375 gnd Maschinelles Lernen (DE-588)4193754-5 gnd |
topic_facet | Neuronales Netz Deep learning Maschinelles Lernen |
url | http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=034935271&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |
work_keys_str_mv | AT senzekai shallowanddeeplearningprinciplesscientificphilosophicalandlogicalperspectives |