Deep Learning Techniques for Music Generation

Saved in:

Author: | Briot, Jean-Pierre
---|---
Format: | Electronic eBook
Language: | English
Published: | Cham : Springer International Publishing AG, 2019
Series: | Computational Synthesis and Creative Systems Ser
Subjects: | Computer music
Online Access: | BSB01
Description: | 1 online resource (303 pages)
ISBN: | 9783319701639
Internal format: MARC
LEADER | 00000nmm a2200000 c 4500 | ||
---|---|---|---|
001 | BV048935458 | ||
003 | DE-604 | ||
005 | 00000000000000.0 | ||
007 | cr|uuu---uuuuu | ||
008 | 230509s2019 |||| o||u| ||||||eng d | ||
020 | |a 9783319701639 |q electronic bk. |9 978-3-319-70163-9 | ||
035 | |a (ZDB-1-PQM)EBC5975935 | ||
035 | |a (ZDB-30-PAD)EBC5975935 | ||
035 | |a (ZDB-89-EBL)EBL5975935 | ||
035 | |a (OCoLC)1127931385 | ||
035 | |a (DE-599)BVBBV048935458 | ||
040 | |a DE-604 |b ger |e rda | ||
041 | 0 | |a eng | |
049 | |a DE-12 | ||
084 | |a MUS |q DE-12 |2 fid | ||
100 | 1 | |a Briot, Jean-Pierre |e Verfasser |4 aut | |
245 | 1 | 0 | |a Deep Learning Techniques for Music Generation |
264 | 1 | |a Cham |b Springer International Publishing AG |c 2019 | |
264 | 4 | |c ©2020 | |
300 | |a 1 Online-Ressource (303 Seiten) | ||
336 | |b txt |2 rdacontent | ||
337 | |b c |2 rdamedia | ||
338 | |b cr |2 rdacarrier | ||
490 | 0 | |a Computational Synthesis and Creative Systems Ser | |
505 | 8 | |a Intro -- Preface -- Acknowledgements -- Contents -- List of Tables -- List of Figures -- Acronyms -- Chapter 1 Introduction -- 1.1 Motivation -- 1.1.1 Computer-Based Music Systems -- 1.1.2 Autonomy versus Assistance -- 1.1.3 Symbolic versus Sub-Symbolic AI -- 1.1.4 Deep Learning -- 1.1.5 Present and Future -- 1.2 This Book -- 1.2.1 Other Books and Sources -- 1.2.2 Other Models -- 1.2.3 Deep Learning versus Markov Models -- 1.2.4 Requisites and Roadmap -- 1.2.5 Limits -- Chapter 2 Method -- 2.1 Dimensions -- 2.1.1 Objective -- 2.1.2 Representation -- 2.1.3 Architecture -- 2.1.4 Challenge -- 2.1.5 Strategy -- 2.2 Discussion -- Chapter 3 Objective -- 3.1 Facets -- 3.1.1 Type -- 3.1.2 Destination and Use -- 3.1.3 Mode -- 3.1.4 Style -- Chapter 4 Representation -- 4.1 Phases and Types of Data -- 4.2 Audio versus Symbolic -- 4.3 Audio -- 4.3.1 Waveform -- 4.3.2 Transformed Representations -- 4.3.3 Spectrogram -- 4.3.4 Chromagram -- 4.4 Symbolic -- 4.5 Main Concepts -- 4.5.1 Note -- 4.5.2 Rest -- 4.5.3 Interval -- 4.5.4 Chord -- 4.5.5 Rhythm -- 4.5.5.1 Beat and Meter -- 4.5.5.2 Levels of Rhythm Information -- 4.6 Multivoice/Multitrack -- 4.7 Format -- 4.7.1 MIDI -- 4.7.2 Piano Roll -- 4.7.3 Text -- 4.7.3.1 Melody -- 4.7.3.2 Chord and Polyphony -- 4.7.4 Markup Language -- 4.7.5 Lead Sheet -- 4.8 Temporal Scope and Granularity -- 4.8.1 Temporal Scope -- 4.8.2 Temporal Granularity -- 4.9 Metadata -- 4.9.1 Note Hold/Ending -- 4.9.2 Note Denotation (versus Enharmony) -- 4.9.3 Feature Extraction -- 4.10 Expressiveness -- 4.10.1 Timing -- 4.10.2 Dynamics -- 4.10.3 Audio -- 4.11 Encoding -- 4.11.1 Strategies -- 4.11.2 From One-Hot to Many-Hot and to Multi-One-Hot -- 4.11.3 Summary -- 4.11.4 Binning -- 4.11.5 Pros and Cons -- 4.11.6 Chords -- 4.11.7 Special Hold and Rest Symbols -- 4.11.8 Drums and Percussion -- 4.12 Dataset -- 4.12.1 Transposition and Alignment | |
505 | 8 | |a 4.12.2 Datasets and Libraries -- Chapter 5 Architecture -- 5.1 Introduction to Neural Networks -- 5.1.1 Linear Regression -- 5.1.2 Notations -- 5.1.3 Model Training -- 5.1.4 Gradient Descent Training Algorithm -- 5.1.5 From Model to Architecture -- 5.1.6 From Model to Linear Algebra Representation -- 5.1.7 From Simple to Multivariate Model -- 5.1.8 Activation Function -- 5.2 Basic Building Block -- 5.2.1 Feedforward Computation -- 5.2.2 Computing Multiple Input Data Simultaneously -- 5.3 Machine Learning -- 5.3.1 Definition -- 5.3.2 Categories -- 5.3.3 Components -- 5.3.4 Optimization -- 5.4 Architectures -- 5.5 Multilayer Neural Network aka Feedforward Neural Network -- 5.5.1 Abstract Representation -- 5.5.2 Depth -- 5.5.3 Output Activation Function -- 5.5.4 Cost Function -- 5.5.5 Interpretation -- 5.5.6 Entropy and Cross-Entropy -- 5.5.7 Feedforward Propagation -- 5.5.8 Training -- 5.5.9 Overfitting -- 5.5.10 Regularization -- 5.5.11 Hyperparameters -- 5.5.12 Platforms and Libraries -- 5.6 Autoencoder -- 5.6.1 Sparse Autoencoder -- 5.6.2 Variational Autoencoder -- 5.6.3 Stacked Autoencoder -- 5.7 Restricted Boltzmann Machine (RBM) -- 5.7.1 Training -- 5.7.2 Sampling -- 5.7.3 Types of Variables -- 5.8 Recurrent Neural Network (RNN) -- 5.8.1 Visual Representation -- 5.8.2 Training -- 5.8.3 Long Short-Term Memory (LSTM) -- 5.8.4 Attention Mechanism -- 5.9 Convolutional Architectural Pattern -- 5.9.1 Principles -- 5.9.2 Stages -- 5.9.3 Pooling -- 5.9.4 Multilayer Convolutional Architecture -- 5.9.5 Convolution over Time -- 5.10 Conditioning Architectural Pattern -- 5.11 Generative Adversarial Networks (GAN) Architectural Pattern -- 5.11.1 Challenges -- 5.12 Reinforcement Learning -- 5.13 Compound Architectures -- 5.13.1 Composition Types -- 5.13.2 Bidirectional RNN -- 5.13.3 RNN Encoder-Decoder -- 5.13.4 Variational RNN Encoder-Decoder | |
505 | 8 | |a 5.13.5 Polyphonic Recurrent Networks -- 5.13.6 Further Compound Architectures -- 5.13.7 The Limits of Composition -- Chapter 6 Challenge and Strategy -- 6.1 Notations for Architecture and Representation Dimensions -- 6.2 An Introductory Example -- 6.2.1 Single-Step Feedforward Strategy -- 6.2.2 Example: MiniBach Chorale Counterpoint Accompaniment Symbolic Music Generation System -- 6.2.3 A First Analysis -- 6.3 A Tentative List of Limitations and Challenges -- 6.4 Ex Nihilo Generation -- 6.4.1 Decoder Feedforward -- 6.4.1.1 #1 Example: DeepHear Ragtime Melody Symbolic Music Generation System -- 6.4.1.2 #2 Example: deepAutoController Audio Music Generation System -- 6.4.2 Sampling -- 6.4.2.1 Sampling Basics -- 6.4.2.2 Sampling for Music Generation -- 6.4.2.3 Example: RBM-based Chord Music Generation System -- 6.5 Length Variability -- 6.5.1 Iterative Feedforward -- 6.5.1.1 #1 Example: Blues Chord Sequence Symbolic Music Generation System -- 6.5.1.2 #2 Example: Blues Melody and Chords Symbolic Music Generation System -- 6.6 Content Variability -- 6.6.1 Sampling -- 6.6.1.1 #1 Example: CONCERT Bach Melody Symbolic Music Generation System -- 6.6.1.2 #2 Example: Celtic Melody Symbolic Music Generation System -- 6.7 Expressiveness -- 6.7.1 Example: Performance RNN Piano Polyphony Symbolic Music Generation System -- 6.8 RNN and Iterative Feedforward Revisited -- 6.8.1 #1 Example: Time-Windowed Melody Symbolic Music Generation System -- 6.8.2 #2 Example: Sequential Melody Symbolic Music Generation System -- 6.8.3 #3 Example: BLSTM Chord Accompaniment Symbolic Music Generation System -- 6.8.4 Summary -- 6.9 Melody-Harmony Interaction -- 6.9.1 #1 Example: RNN-RBM Polyphony Symbolic Music Generation System -- 6.9.1.1 Other RNN-RBM Systems -- 6.9.2 #2 Example: Hexahedria Polyphony Symbolic Music Generation Architecture | |
505 | 8 | |a 6.9.3 #3 Example: Bi-Axial LSTM Polyphony Symbolic Music Generation Architecture -- 6.10 Control -- 6.10.1 Dimensions of Control Strategies -- 6.10.2 Sampling -- 6.10.2.1 Sampling for Iterative Feedforward Generation -- 6.10.2.2 Sampling for Incremental Generation -- 6.10.2.3 Sampling for Variational Decoder Feedforward Generation -- 6.10.2.4 Sampling for Adversarial Generation -- 6.10.2.5 Sampling for Other Generation Strategies -- 6.10.3 Conditioning -- 6.10.3.1 #1 Example: Rhythm Symbolic Music Generation System -- 6.10.3.2 #2 Example: WaveNet Speech and Music Audio Generation System -- 6.10.3.3 #3 Example: MidiNet Pop Music Melody Symbolic Music Generation System -- 6.10.3.4 #4 Example: DeepJ Style-Specific Polyphony Symbolic Music Generation System -- 6.10.3.5 #5 Example: Anticipation-RNN Bach Melody Symbolic Music Generation System -- 6.10.3.6 #6 Example: VRASH Melody Symbolic Music Generation System -- 6.10.4 Input Manipulation -- 6.10.4.1 #1 Example: DeepHear Ragtime Counterpoint Symbolic Music Generation System -- 6.10.4.2 Relation to Variational Autoencoders -- 6.10.4.3 #2 Example: Deep Dream Psychedelic Images Generation System -- 6.10.4.4 #3 Example: Style Transfer Painting Generation System -- 6.10.4.5 Style Transfer vs Transfer Learning -- 6.10.4.6 #4 Example: Music Style Transfer -- 6.10.5 Input Manipulation and Sampling -- 6.10.5.1 Example: C-RBM Polyphony Symbolic Music Generation System -- 6.10.6 Reinforcement -- 6.10.6.1 Example: RL-Tuner Melody Symbolic Music Generation System -- 6.10.7 Unit Selection -- 6.10.7.1 Example: Unit Selection and Concatenation Symbolic Melody Generation System -- 6.11 Style Transfer -- 6.11.1 Composition Style Transfer -- 6.11.2 Timbre Style Transfer -- 6.11.2.1 Examples: Audio Timbre Style Transfer Systems -- 6.11.2.2 Limits and Challenges -- 6.11.3 Performance Style Transfer | |
505 | 8 | |a 6.11.4 Example: FlowComposer Composition Support Environment -- 6.12 Structure -- 6.12.1 Example: MusicVAE Multivoice Hierarchical Symbolic Music Generation System -- 6.12.2 Other Temporal Architectural Hierarchies -- 6.13 Originality -- 6.13.1 Conditioning -- 6.13.1.1 Example: MidiNet Melody Generation System -- 6.13.2 Creative Adversarial Networks -- 6.13.2.1 Creative Adversarial Networks Painting Generation System -- 6.14 Incrementality -- 6.14.1 Note Instantiation Strategies -- 6.14.2 Example: DeepBach Chorale Multivoice Symbolic Music Generation System -- 6.15 Interactivity -- 6.15.1 #1 Example: deepAutoController Audio Music Generation System -- 6.15.2 #2 Example: DeepBach Chorale Symbolic Music Generation System -- 6.15.3 Interface Definition -- 6.16 Adaptability -- 6.17 Explainability -- 6.17.1 #1 Example: BachBot Chorale Polyphonic Symbolic Music Generation System -- 6.17.2 #2 Example: deepAutoController Audio Music Generation System -- 6.17.3 Towards Automated Analysis -- 6.18 Discussion -- Chapter 7 Analysis -- 7.1 Referencing and Abbreviations -- 7.2 System Analysis -- 7.3 Correlation Analysis -- Chapter 8 Discussion and Conclusion -- 8.1 Global versus Time Step -- 8.2 Convolution versus Recurrent -- 8.3 Style Transfer and Transfer Learning -- 8.4 Cooperation -- 8.5 Specialization -- 8.6 Evaluation and Creativity -- 8.7 Conclusion -- References -- Glossary -- Index | |
650 | 4 | |a Computer music | |
653 | 6 | |a Electronic books | |
700 | 1 | |a Hadjeres, Gaëtan |e Sonstige |4 oth | |
700 | 1 | |a Pachet, François-David |e Sonstige |4 oth | |
776 | 0 | 8 | |i Erscheint auch als |n Druck-Ausgabe |a Briot, Jean-Pierre |t Deep Learning Techniques for Music Generation |d Cham : Springer International Publishing AG,c2019 |z 9783319701622 |
912 | |a ZDB-1-PQM |a ZDB-30-PQE | ||
999 | |a oai:aleph.bib-bvb.de:BVB01-034199324 | ||
966 | e | |u https://ebookcentral.proquest.com/lib/bsmfidmusic/detail.action?docID=5975935 |l BSB01 |p ZDB-1-PQM |q BSB_PDA_PQM |x Aggregator |3 Volltext |
Datensatz im Suchindex
_version_ | 1804185131264507904 |
---|---|
adam_txt | |
any_adam_object | |
any_adam_object_boolean | |
author | Briot, Jean-Pierre |
author_facet | Briot, Jean-Pierre |
author_role | aut |
author_sort | Briot, Jean-Pierre |
author_variant | j p b jpb |
building | Verbundindex |
bvnumber | BV048935458 |
collection | ZDB-1-PQM ZDB-30-PQE |
contents | Intro -- Preface -- Acknowledgements -- Contents -- List of Tables -- List of Figures -- Acronyms -- Chapter 1 Introduction -- 1.1 Motivation -- 1.1.1 Computer-Based Music Systems -- 1.1.2 Autonomy versus Assistance -- 1.1.3 Symbolic versus Sub-Symbolic AI -- 1.1.4 Deep Learning -- 1.1.5 Present and Future -- 1.2 This Book -- 1.2.1 Other Books and Sources -- 1.2.2 Other Models -- 1.2.3 Deep Learning versus Markov Models -- 1.2.4 Requisites and Roadmap -- 1.2.5 Limits -- Chapter 2 Method -- 2.1 Dimensions -- 2.1.1 Objective -- 2.1.2 Representation -- 2.1.3 Architecture -- 2.1.4 Challenge -- 2.1.5 Strategy -- 2.2 Discussion -- Chapter 3 Objective -- 3.1 Facets -- 3.1.1 Type -- 3.1.2 Destination and Use -- 3.1.3 Mode -- 3.1.4 Style -- Chapter 4 Representation -- 4.1 Phases and Types of Data -- 4.2 Audio versus Symbolic -- 4.3 Audio -- 4.3.1 Waveform -- 4.3.2 Transformed Representations -- 4.3.3 Spectrogram -- 4.3.4 Chromagram -- 4.4 Symbolic -- 4.5 Main Concepts -- 4.5.1 Note -- 4.5.2 Rest -- 4.5.3 Interval -- 4.5.4 Chord -- 4.5.5 Rhythm -- 4.5.5.1 Beat and Meter -- 4.5.5.2 Levels of Rhythm Information -- 4.6 Multivoice/Multitrack -- 4.7 Format -- 4.7.1 MIDI -- 4.7.2 Piano Roll -- 4.7.3 Text -- 4.7.3.1 Melody -- 4.7.3.2 Chord and Polyphony -- 4.7.4 Markup Language -- 4.7.5 Lead Sheet -- 4.8 Temporal Scope and Granularity -- 4.8.1 Temporal Scope -- 4.8.2 Temporal Granularity -- 4.9 Metadata -- 4.9.1 Note Hold/Ending -- 4.9.2 Note Denotation (versus Enharmony) -- 4.9.3 Feature Extraction -- 4.10 Expressiveness -- 4.10.1 Timing -- 4.10.2 Dynamics -- 4.10.3 Audio -- 4.11 Encoding -- 4.11.1 Strategies -- 4.11.2 From One-Hot to Many-Hot and to Multi-One-Hot -- 4.11.3 Summary -- 4.11.4 Binning -- 4.11.5 Pros and Cons -- 4.11.6 Chords -- 4.11.7 Special Hold and Rest Symbols -- 4.11.8 Drums and Percussion -- 4.12 Dataset -- 4.12.1 Transposition and Alignment 4.12.2 Datasets and Libraries -- Chapter 5 Architecture -- 5.1 Introduction to Neural Networks -- 5.1.1 
Linear Regression -- 5.1.2 Notations -- 5.1.3 Model Training -- 5.1.4 Gradient Descent Training Algorithm -- 5.1.5 From Model to Architecture -- 5.1.6 From Model to Linear Algebra Representation -- 5.1.7 From Simple to Multivariate Model -- 5.1.8 Activation Function -- 5.2 Basic Building Block -- 5.2.1 Feedforward Computation -- 5.2.2 Computing Multiple Input Data Simultaneously -- 5.3 Machine Learning -- 5.3.1 Definition -- 5.3.2 Categories -- 5.3.3 Components -- 5.3.4 Optimization -- 5.4 Architectures -- 5.5 Multilayer Neural Network aka Feedforward Neural Network -- 5.5.1 Abstract Representation -- 5.5.2 Depth -- 5.5.3 Output Activation Function -- 5.5.4 Cost Function -- 5.5.5 Interpretation -- 5.5.6 Entropy and Cross-Entropy -- 5.5.7 Feedforward Propagation -- 5.5.8 Training -- 5.5.9 Overfitting -- 5.5.10 Regularization -- 5.5.11 Hyperparameters -- 5.5.12 Platforms and Libraries -- 5.6 Autoencoder -- 5.6.1 Sparse Autoencoder -- 5.6.2 Variational Autoencoder -- 5.6.3 Stacked Autoencoder -- 5.7 Restricted Boltzmann Machine (RBM) -- 5.7.1 Training -- 5.7.2 Sampling -- 5.7.3 Types of Variables -- 5.8 Recurrent Neural Network (RNN) -- 5.8.1 Visual Representation -- 5.8.2 Training -- 5.8.3 Long Short-Term Memory (LSTM) -- 5.8.4 Attention Mechanism -- 5.9 Convolutional Architectural Pattern -- 5.9.1 Principles -- 5.9.2 Stages -- 5.9.3 Pooling -- 5.9.4 Multilayer Convolutional Architecture -- 5.9.5 Convolution over Time -- 5.10 Conditioning Architectural Pattern -- 5.11 Generative Adversarial Networks (GAN) Architectural Pattern -- 5.11.1 Challenges -- 5.12 Reinforcement Learning -- 5.13 Compound Architectures -- 5.13.1 Composition Types -- 5.13.2 Bidirectional RNN -- 5.13.3 RNN Encoder-Decoder -- 5.13.4 Variational RNN Encoder-Decoder 5.13.5 Polyphonic Recurrent Networks -- 5.13.6 Further Compound Architectures -- 5.13.7 The Limits of Composition -- Chapter 6 Challenge and Strategy -- 6.1 Notations for Architecture and Representation Dimensions -- 6.2 An Introductory 
Example -- 6.2.1 Single-Step Feedforward Strategy -- 6.2.2 Example: MiniBach Chorale Counterpoint Accompaniment Symbolic Music Generation System -- 6.2.3 A First Analysis -- 6.3 A Tentative List of Limitations and Challenges -- 6.4 Ex Nihilo Generation -- 6.4.1 Decoder Feedforward -- 6.4.1.1 #1 Example: DeepHear Ragtime Melody Symbolic Music Generation System -- 6.4.1.2 #2 Example: deepAutoController Audio Music Generation System -- 6.4.2 Sampling -- 6.4.2.1 Sampling Basics -- 6.4.2.2 Sampling for Music Generation -- 6.4.2.3 Example: RBM-based Chord Music Generation System -- 6.5 Length Variability -- 6.5.1 Iterative Feedforward -- 6.5.1.1 #1 Example: Blues Chord Sequence Symbolic Music Generation System -- 6.5.1.2 #2 Example: Blues Melody and Chords Symbolic Music Generation System -- 6.6 Content Variability -- 6.6.1 Sampling -- 6.6.1.1 #1 Example: CONCERT Bach Melody Symbolic Music Generation System -- 6.6.1.2 #2 Example: Celtic Melody Symbolic Music Generation System -- 6.7 Expressiveness -- 6.7.1 Example: Performance RNN Piano Polyphony Symbolic Music Generation System -- 6.8 RNN and Iterative Feedforward Revisited -- 6.8.1 #1 Example: Time-Windowed Melody Symbolic Music Generation System -- 6.8.2 #2 Example: Sequential Melody Symbolic Music Generation System -- 6.8.3 #3 Example: BLSTM Chord Accompaniment Symbolic Music Generation System -- 6.8.4 Summary -- 6.9 Melody-Harmony Interaction -- 6.9.1 #1 Example: RNN-RBM Polyphony Symbolic Music Generation System -- 6.9.1.1 Other RNN-RBM Systems -- 6.9.2 #2 Example: Hexahedria Polyphony Symbolic Music Generation Architecture 6.9.3 #3 Example: Bi-Axial LSTM Polyphony Symbolic Music Generation Architecture -- 6.10 Control -- 6.10.1 Dimensions of Control Strategies -- 6.10.2 Sampling -- 6.10.2.1 Sampling for Iterative Feedforward Generation -- 6.10.2.2 Sampling for Incremental Generation -- 6.10.2.3 Sampling for Variational Decoder Feedforward Generation -- 6.10.2.4 Sampling for Adversarial Generation -- 6.10.2.5 
Sampling for Other Generation Strategies -- 6.10.3 Conditioning -- 6.10.3.1 #1 Example: Rhythm Symbolic Music Generation System -- 6.10.3.2 #2 Example:WaveNet Speech and Music Audio Generation System -- 6.10.3.3 #3 Example: MidiNet Pop Music Melody Symbolic Music Generation System -- 6.10.3.4 #4 Example: DeepJ Style-Specific Polyphony Symbolic Music Generation System -- 6.10.3.5 #5 Example: Anticipation-RNN Bach Melody Symbolic Music Generation System -- 6.10.3.6 #6 Example: VRASH Melody Symbolic Music Generation System -- 6.10.4 Input Manipulation -- 6.10.4.1 #1 Example: DeepHear Ragtime Counterpoint Symbolic Music Generation System -- 6.10.4.2 Relation to Variational Autoencoders -- 6.10.4.3 #2 Example: Deep Dream Psychedelic Images Generation System -- 6.10.4.4 #3 Example: Style Transfer Painting Generation System -- 6.10.4.5 Style Transfer vs Transfer Learning -- 6.10.4.6 #4 Example: Music Style Transfer -- 6.10.5 Input Manipulation and Sampling -- 6.10.5.1 Example: C-RBM Polyphony Symbolic Music Generation System -- 6.10.6 Reinforcement -- 6.10.6.1 Example: RL-Tuner Melody Symbolic Music Generation System -- 6.10.7 Unit Selection -- 6.10.7.1 Example: Unit Selection and Concatenation Symbolic Melody Generation System -- 6.11 Style Transfer -- 6.11.1 Composition Style Transfer -- 6.11.2 Timbre Style Transfer -- 6.11.2.1 Examples: Audio Timbre Style Transfer Systems -- 6.11.2.2 Limits and Challenges -- 6.11.3 Performance Style Transfer 6.11.4 Example: FlowComposer Composition Support Environment -- 6.12 Structure -- 6.12.1 Example: MusicVAE Multivoice Hierarchical Symbolic Music Generation System -- 6.12.2 Other Temporal Architectural Hierarchies -- 6.13 Originality -- 6.13.1 Conditioning -- 6.13.1.1 Example: MidiNet Melody Generation System -- 6.13.2 Creative Adversarial Networks -- 6.13.2.1 Creative Adversarial Networks Painting Generation System -- 6.14 Incrementality -- 6.14.1 Note Instantiation Strategies -- 6.14.2 Example: DeepBach Chorale Multivoice 
Symbolic Music Generation System -- 6.15 Interactivity -- 6.15.1 #1 Example: deepAutoController Audio Music Generation System -- 6.15.2 #2 Example: DeepBach Chorale Symbolic Music Generation System -- 6.15.3 Interface Definition -- 6.16 Adaptability -- 6.17 Explainability -- 6.17.1 #1 Example: BachBot Chorale Polyphonic Symbolic Music Generation System -- 6.17.2 #2 Example: deepAutoController Audio Music Generation System -- 6.17.3 Towards Automated Analysis -- 6.18 Discussion -- Chapter 7 Analysis -- 7.1 Referencing and Abbreviations -- 7.2 System Analysis -- 7.3 Correlation Analysis -- Chapter 8 Discussion and Conclusion -- 8.1 Global versus Time Step -- 8.2 Convolution versus Recurrent -- 8.3 Style Transfer and Transfer Learning -- 8.4 Cooperation -- 8.5 Specialization -- 8.6 Evaluation and Creativity -- 8.7 Conclusion -- References -- Glossary -- Index |
ctrlnum | (ZDB-1-PQM)EBC5975935 (ZDB-30-PAD)EBC5975935 (ZDB-89-EBL)EBL5975935 (OCoLC)1127931385 (DE-599)BVBBV048935458 |
format | Electronic eBook |
fullrecord | <?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>10451nmm a2200469 c 4500</leader><controlfield tag="001">BV048935458</controlfield><controlfield tag="003">DE-604</controlfield><controlfield tag="005">00000000000000.0</controlfield><controlfield tag="007">cr|uuu---uuuuu</controlfield><controlfield tag="008">230509s2019 |||| o||u| ||||||eng d</controlfield><datafield tag="020" ind1=" " ind2=" "><subfield code="a">9783319701639</subfield><subfield code="q">electronic bk.</subfield><subfield code="9">978-3-319-70163-9</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ZDB-1-PQM)EBC5975935</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ZDB-30-PAD)EBC5975935</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ZDB-89-EBL)EBL5975935</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(OCoLC)1127931385</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)BVBBV048935458</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-604</subfield><subfield code="b">ger</subfield><subfield code="e">rda</subfield></datafield><datafield tag="041" ind1="0" ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="049" ind1=" " ind2=" "><subfield code="a">DE-12</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">MUS</subfield><subfield code="q">DE-12</subfield><subfield code="2">fid</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Briot, Jean-Pierre</subfield><subfield code="e">Verfasser</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Deep Learning Techniques for Music Generation</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="a">Cham</subfield><subfield code="b">Springer 
International Publishing AG</subfield><subfield code="c">2019</subfield></datafield><datafield tag="264" ind1=" " ind2="4"><subfield code="c">©2020</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">1 Online-Ressource (303 Seiten)</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="490" ind1="0" ind2=" "><subfield code="a">Computational Synthesis and Creative Systems Ser</subfield></datafield><datafield tag="505" ind1="8" ind2=" "><subfield code="a">Intro -- Preface -- Acknowledgements -- Contents -- List of Tables -- List of Figures -- Acronyms -- Chapter 1 Introduction -- 1.1 Motivation -- 1.1.1 Computer-Based Music Systems -- 1.1.2 Autonomy versus Assistance -- 1.1.3 Symbolic versus Sub-Symbolic AI -- 1.1.4 Deep Learning -- 1.1.5 Present and Future -- 1.2 This Book -- 1.2.1 Other Books and Sources -- 1.2.2 Other Models -- 1.2.3 Deep Learning versus Markov Models -- 1.2.4 Requisites and Roadmap -- 1.2.5 Limits -- Chapter 2 Method -- 2.1 Dimensions -- 2.1.1 Objective -- 2.1.2 Representation -- 2.1.3 Architecture -- 2.1.4 Challenge -- 2.1.5 Strategy -- 2.2 Discussion -- Chapter 3 Objective -- 3.1 Facets -- 3.1.1 Type -- 3.1.2 Destination and Use -- 3.1.3 Mode -- 3.1.4 Style -- Chapter 4 Representation -- 4.1 Phases and Types of Data -- 4.2 Audio versus Symbolic -- 4.3 Audio -- 4.3.1 Waveform -- 4.3.2 Transformed Representations -- 4.3.3 Spectrogram -- 4.3.4 Chromagram -- 4.4 Symbolic -- 4.5 Main Concepts -- 4.5.1 Note -- 4.5.2 Rest -- 4.5.3 Interval -- 4.5.4 Chord -- 4.5.5 Rhythm -- 4.5.5.1 Beat and Meter -- 4.5.5.2 Levels of Rhythm Information -- 4.6 Multivoice/Multitrack -- 4.7 Format -- 
4.7.1 MIDI -- 4.7.2 Piano Roll -- 4.7.3 Text -- 4.7.3.1 Melody -- 4.7.3.2 Chord and Polyphony -- 4.7.4 Markup Language -- 4.7.5 Lead Sheet -- 4.8 Temporal Scope and Granularity -- 4.8.1 Temporal Scope -- 4.8.2 Temporal Granularity -- 4.9 Metadata -- 4.9.1 Note Hold/Ending -- 4.9.2 Note Denotation (versus Enharmony) -- 4.9.3 Feature Extraction -- 4.10 Expressiveness -- 4.10.1 Timing -- 4.10.2 Dynamics -- 4.10.3 Audio -- 4.11 Encoding -- 4.11.1 Strategies -- 4.11.2 From One-Hot to Many-Hot and to Multi-One-Hot -- 4.11.3 Summary -- 4.11.4 Binning -- 4.11.5 Pros and Cons -- 4.11.6 Chords -- 4.11.7 Special Hold and Rest Symbols -- 4.11.8 Drums and Percussion -- 4.12 Dataset -- 4.12.1 Transposition and Alignment</subfield></datafield><datafield tag="505" ind1="8" ind2=" "><subfield code="a">4.12.2 Datasets and Libraries -- Chapter 5 Architecture -- 5.1 Introduction to Neural Networks -- 5.1.1 Linear Regression -- 5.1.2 Notations -- 5.1.3 Model Training -- 5.1.4 Gradient Descent Training Algorithm -- 5.1.5 From Model to Architecture -- 5.1.6 From Model to Linear Algebra Representation -- 5.1.7 From Simple to Multivariate Model -- 5.1.8 Activation Function -- 5.2 Basic Building Block -- 5.2.1 Feedforward Computation -- 5.2.2 Computing Multiple Input Data Simultaneously -- 5.3 Machine Learning -- 5.3.1 Definition -- 5.3.2 Categories -- 5.3.3 Components -- 5.3.4 Optimization -- 5.4 Architectures -- 5.5 Multilayer Neural Network aka Feedforward Neural Network -- 5.5.1 Abstract Representation -- 5.5.2 Depth -- 5.5.3 Output Activation Function -- 5.5.4 Cost Function -- 5.5.5 Interpretation -- 5.5.6 Entropy and Cross-Entropy -- 5.5.7 Feedforward Propagation -- 5.5.8 Training -- 5.5.9 Overfitting -- 5.5.10 Regularization -- 5.5.11 Hyperparameters -- 5.5.12 Platforms and Libraries -- 5.6 Autoencoder -- 5.6.1 Sparse Autoencoder -- 5.6.2 Variational Autoencoder -- 5.6.3 Stacked Autoencoder -- 5.7 Restricted Boltzmann Machine (RBM) -- 5.7.1 Training -- 5.7.2 Sampling -- 5.7.3 Types 
of Variables -- 5.8 Recurrent Neural Network (RNN) -- 5.8.1 Visual Representation -- 5.8.2 Training -- 5.8.3 Long Short-Term Memory (LSTM) -- 5.8.4 Attention Mechanism -- 5.9 Convolutional Architectural Pattern -- 5.9.1 Principles -- 5.9.2 Stages -- 5.9.3 Pooling -- 5.9.4 Multilayer Convolutional Architecture -- 5.9.5 Convolution over Time -- 5.10 Conditioning Architectural Pattern -- 5.11 Generative Adversarial Networks (GAN) Architectural Pattern -- 5.11.1 Challenges -- 5.12 Reinforcement Learning -- 5.13 Compound Architectures -- 5.13.1 Composition Types -- 5.13.2 Bidirectional RNN -- 5.13.3 RNN Encoder-Decoder -- 5.13.4 Variational RNN Encoder-Decoder</subfield></datafield><datafield tag="505" ind1="8" ind2=" "><subfield code="a">5.13.5 Polyphonic Recurrent Networks -- 5.13.6 Further Compound Architectures -- 5.13.7 The Limits of Composition -- Chapter 6 Challenge and Strategy -- 6.1 Notations for Architecture and Representation Dimensions -- 6.2 An Introductory Example -- 6.2.1 Single-Step Feedforward Strategy -- 6.2.2 Example: MiniBach Chorale Counterpoint Accompaniment Symbolic Music Generation System -- 6.2.3 A First Analysis -- 6.3 A Tentative List of Limitations and Challenges -- 6.4 Ex Nihilo Generation -- 6.4.1 Decoder Feedforward -- 6.4.1.1 #1 Example: DeepHear Ragtime Melody Symbolic Music Generation System -- 6.4.1.2 #2 Example: deepAutoController Audio Music Generation System -- 6.4.2 Sampling -- 6.4.2.1 Sampling Basics -- 6.4.2.2 Sampling for Music Generation -- 6.4.2.3 Example: RBM-based Chord Music Generation System -- 6.5 Length Variability -- 6.5.1 Iterative Feedforward -- 6.5.1.1 #1 Example: Blues Chord Sequence Symbolic Music Generation System -- 6.5.1.2 #2 Example: Blues Melody and Chords Symbolic Music Generation System -- 6.6 Content Variability -- 6.6.1 Sampling -- 6.6.1.1 #1 Example: CONCERT Bach Melody Symbolic Music Generation System -- 6.6.1.2 #2 Example: Celtic Melody Symbolic Music Generation System -- 6.7 Expressiveness -- 6.7.1 
Example: Performance RNN Piano Polyphony Symbolic Music Generation System -- 6.8 RNN and Iterative Feedforward Revisited -- 6.8.1 #1 Example: Time-Windowed Melody Symbolic Music Generation System -- 6.8.2 #2 Example: Sequential Melody Symbolic Music Generation System -- 6.8.3 #3 Example: BLSTM Chord Accompaniment Symbolic Music Generation System -- 6.8.4 Summary -- 6.9 Melody-Harmony Interaction -- 6.9.1 #1 Example: RNN-RBM Polyphony Symbolic Music Generation System -- 6.9.1.1 Other RNN-RBM Systems -- 6.9.2 #2 Example: Hexahedria Polyphony Symbolic Music Generation Architecture</subfield></datafield><datafield tag="505" ind1="8" ind2=" "><subfield code="a">6.9.3 #3 Example: Bi-Axial LSTM Polyphony Symbolic Music Generation Architecture -- 6.10 Control -- 6.10.1 Dimensions of Control Strategies -- 6.10.2 Sampling -- 6.10.2.1 Sampling for Iterative Feedforward Generation -- 6.10.2.2 Sampling for Incremental Generation -- 6.10.2.3 Sampling for Variational Decoder Feedforward Generation -- 6.10.2.4 Sampling for Adversarial Generation -- 6.10.2.5 Sampling for Other Generation Strategies -- 6.10.3 Conditioning -- 6.10.3.1 #1 Example: Rhythm Symbolic Music Generation System -- 6.10.3.2 #2 Example:WaveNet Speech and Music Audio Generation System -- 6.10.3.3 #3 Example: MidiNet Pop Music Melody Symbolic Music Generation System -- 6.10.3.4 #4 Example: DeepJ Style-Specific Polyphony Symbolic Music Generation System -- 6.10.3.5 #5 Example: Anticipation-RNN Bach Melody Symbolic Music Generation System -- 6.10.3.6 #6 Example: VRASH Melody Symbolic Music Generation System -- 6.10.4 Input Manipulation -- 6.10.4.1 #1 Example: DeepHear Ragtime Counterpoint Symbolic Music Generation System -- 6.10.4.2 Relation to Variational Autoencoders -- 6.10.4.3 #2 Example: Deep Dream Psychedelic Images Generation System -- 6.10.4.4 #3 Example: Style Transfer Painting Generation System -- 6.10.4.5 Style Transfer vs Transfer Learning -- 6.10.4.6 #4 Example: Music Style Transfer -- 6.10.5 Input 
Manipulation and Sampling -- 6.10.5.1 Example: C-RBM Polyphony Symbolic Music Generation System -- 6.10.6 Reinforcement -- 6.10.6.1 Example: RL-Tuner Melody Symbolic Music Generation System -- 6.10.7 Unit Selection -- 6.10.7.1 Example: Unit Selection and Concatenation Symbolic Melody Generation System -- 6.11 Style Transfer -- 6.11.1 Composition Style Transfer -- 6.11.2 Timbre Style Transfer -- 6.11.2.1 Examples: Audio Timbre Style Transfer Systems -- 6.11.2.2 Limits and Challenges -- 6.11.3 Performance Style Transfer</subfield></datafield><datafield tag="505" ind1="8" ind2=" "><subfield code="a">6.11.4 Example: FlowComposer Composition Support Environment -- 6.12 Structure -- 6.12.1 Example: MusicVAE Multivoice Hierarchical Symbolic Music Generation System -- 6.12.2 Other Temporal Architectural Hierarchies -- 6.13 Originality -- 6.13.1 Conditioning -- 6.13.1.1 Example: MidiNet Melody Generation System -- 6.13.2 Creative Adversarial Networks -- 6.13.2.1 Creative Adversarial Networks Painting Generation System -- 6.14 Incrementality -- 6.14.1 Note Instantiation Strategies -- 6.14.2 Example: DeepBach Chorale Multivoice Symbolic Music Generation System -- 6.15 Interactivity -- 6.15.1 #1 Example: deepAutoController Audio Music Generation System -- 6.15.2 #2 Example: DeepBach Chorale Symbolic Music Generation System -- 6.15.3 Interface Definition -- 6.16 Adaptability -- 6.17 Explainability -- 6.17.1 #1 Example: BachBot Chorale Polyphonic Symbolic Music Generation System -- 6.17.2 #2 Example: deepAutoController Audio Music Generation System -- 6.17.3 Towards Automated Analysis -- 6.18 Discussion -- Chapter 7 Analysis -- 7.1 Referencing and Abbreviations -- 7.2 System Analysis -- 7.3 Correlation Analysis -- Chapter 8 Discussion and Conclusion -- 8.1 Global versus Time Step -- 8.2 Convolution versus Recurrent -- 8.3 Style Transfer and Transfer Learning -- 8.4 Cooperation -- 8.5 Specialization -- 8.6 Evaluation and Creativity -- 8.7 Conclusion -- References -- Glossary -- 
Index |
650 | 4 | |a Computer music |
653 | 6 | |a Electronic books |
700 | 1 | |a Hadjeres, Gaëtan |e Sonstige |4 oth |
700 | 1 | |a Pachet, François-David |e Sonstige |4 oth |
776 | 0 8 | |i Erscheint auch als |n Druck-Ausgabe |a Briot, Jean-Pierre |t Deep Learning Techniques for Music Generation |d Cham : Springer International Publishing AG, c2019 |z 9783319701622 |
912 | | |a ZDB-1-PQM |a ZDB-30-PQE |
999 | | |a oai:aleph.bib-bvb.de:BVB01-034199324 |
966 | e | |u https://ebookcentral.proquest.com/lib/bsmfidmusic/detail.action?docID=5975935 |l BSB01 |p ZDB-1-PQM |q BSB_PDA_PQM |x Aggregator |3 Volltext |
id | DE-604.BV048935458 |
illustrated | Not Illustrated |
index_date | 2024-07-03T21:58:11Z |
indexdate | 2024-07-10T09:50:21Z |
institution | BVB |
isbn | 9783319701639 |
language | English |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-034199324 |
oclc_num | 1127931385 |
open_access_boolean | |
owner | DE-12 |
owner_facet | DE-12 |
physical | 1 Online-Ressource (303 Seiten) |
psigel | ZDB-1-PQM ZDB-30-PQE ZDB-1-PQM BSB_PDA_PQM |
publishDate | 2019 |
publishDateSearch | 2019 |
publishDateSort | 2019 |
publisher | Springer International Publishing AG |
record_format | marc |
series2 | Computational Synthesis and Creative Systems Ser |
spelling | Briot, Jean-Pierre Verfasser aut Deep Learning Techniques for Music Generation Cham Springer International Publishing AG 2019 ©2020 1 Online-Ressource (303 Seiten) txt rdacontent c rdamedia cr rdacarrier Computational Synthesis and Creative Systems Ser Intro -- Preface -- Acknowledgements -- Contents -- List of Tables -- List of Figures -- Acronyms -- Chapter 1 Introduction -- 1.1 Motivation -- 1.1.1 Computer-Based Music Systems -- 1.1.2 Autonomy versus Assistance -- 1.1.3 Symbolic versus Sub-Symbolic AI -- 1.1.4 Deep Learning -- 1.1.5 Present and Future -- 1.2 This Book -- 1.2.1 Other Books and Sources -- 1.2.2 Other Models -- 1.2.3 Deep Learning versus Markov Models -- 1.2.4 Requisites and Roadmap -- 1.2.5 Limits -- Chapter 2 Method -- 2.1 Dimensions -- 2.1.1 Objective -- 2.1.2 Representation -- 2.1.3 Architecture -- 2.1.4 Challenge -- 2.1.5 Strategy -- 2.2 Discussion -- Chapter 3 Objective -- 3.1 Facets -- 3.1.1 Type -- 3.1.2 Destination and Use -- 3.1.3 Mode -- 3.1.4 Style -- Chapter 4 Representation -- 4.1 Phases and Types of Data -- 4.2 Audio versus Symbolic -- 4.3 Audio -- 4.3.1 Waveform -- 4.3.2 Transformed Representations -- 4.3.3 Spectrogram -- 4.3.4 Chromagram -- 4.4 Symbolic -- 4.5 Main Concepts -- 4.5.1 Note -- 4.5.2 Rest -- 4.5.3 Interval -- 4.5.4 Chord -- 4.5.5 Rhythm -- 4.5.5.1 Beat and Meter -- 4.5.5.2 Levels of Rhythm Information -- 4.6 Multivoice/Multitrack -- 4.7 Format -- 4.7.1 MIDI -- 4.7.2 Piano Roll -- 4.7.3 Text -- 4.7.3.1 Melody -- 4.7.3.2 Chord and Polyphony -- 4.7.4 Markup Language -- 4.7.5 Lead Sheet -- 4.8 Temporal Scope and Granularity -- 4.8.1 Temporal Scope -- 4.8.2 Temporal Granularity -- 4.9 Metadata -- 4.9.1 Note Hold/Ending -- 4.9.2 Note Denotation (versus Enharmony) -- 4.9.3 Feature Extraction -- 4.10 Expressiveness -- 4.10.1 Timing -- 4.10.2 Dynamics -- 4.10.3 Audio -- 4.11 Encoding -- 4.11.1 Strategies -- 4.11.2 From One-Hot to Many-Hot and to Multi-One-Hot -- 4.11.3 Summary -- 4.11.4 Binning -- 4.11.5 Pros and 
Cons -- 4.11.6 Chords -- 4.11.7 Special Hold and Rest Symbols -- 4.11.8 Drums and Percussion -- 4.12 Dataset -- 4.12.1 Transposition and Alignment -- 4.12.2 Datasets and Libraries -- Chapter 5 Architecture -- 5.1 Introduction to Neural Networks -- 5.1.1 Linear Regression -- 5.1.2 Notations -- 5.1.3 Model Training -- 5.1.4 Gradient Descent Training Algorithm -- 5.1.5 From Model to Architecture -- 5.1.6 From Model to Linear Algebra Representation -- 5.1.7 From Simple to Multivariate Model -- 5.1.8 Activation Function -- 5.2 Basic Building Block -- 5.2.1 Feedforward Computation -- 5.2.2 Computing Multiple Input Data Simultaneously -- 5.3 Machine Learning -- 5.3.1 Definition -- 5.3.2 Categories -- 5.3.3 Components -- 5.3.4 Optimization -- 5.4 Architectures -- 5.5 Multilayer Neural Network aka Feedforward Neural Network -- 5.5.1 Abstract Representation -- 5.5.2 Depth -- 5.5.3 Output Activation Function -- 5.5.4 Cost Function -- 5.5.5 Interpretation -- 5.5.6 Entropy and Cross-Entropy -- 5.5.7 Feedforward Propagation -- 5.5.8 Training -- 5.5.9 Overfitting -- 5.5.10 Regularization -- 5.5.11 Hyperparameters -- 5.5.12 Platforms and Libraries -- 5.6 Autoencoder -- 5.6.1 Sparse Autoencoder -- 5.6.2 Variational Autoencoder -- 5.6.3 Stacked Autoencoder -- 5.7 Restricted Boltzmann Machine (RBM) -- 5.7.1 Training -- 5.7.2 Sampling -- 5.7.3 Types of Variables -- 5.8 Recurrent Neural Network (RNN) -- 5.8.1 Visual Representation -- 5.8.2 Training -- 5.8.3 Long Short-Term Memory (LSTM) -- 5.8.4 Attention Mechanism -- 5.9 Convolutional Architectural Pattern -- 5.9.1 Principles -- 5.9.2 Stages -- 5.9.3 Pooling -- 5.9.4 Multilayer Convolutional Architecture -- 5.9.5 Convolution over Time -- 5.10 Conditioning Architectural Pattern -- 5.11 Generative Adversarial Networks (GAN) Architectural Pattern -- 5.11.1 Challenges -- 5.12 Reinforcement Learning -- 5.13 Compound Architectures -- 5.13.1 Composition Types -- 5.13.2 Bidirectional RNN -- 5.13.3 RNN Encoder-Decoder -- 5.13.4 Variational RNN Encoder-Decoder -- 5.13.5 Polyphonic Recurrent Networks -- 5.13.6 Further Compound Architectures -- 5.13.7 The Limits of Composition -- Chapter 6 Challenge and Strategy -- 6.1 Notations for Architecture and Representation Dimensions -- 6.2 An Introductory Example -- 6.2.1 Single-Step Feedforward Strategy -- 6.2.2 Example: MiniBach Chorale Counterpoint Accompaniment Symbolic Music Generation System -- 6.2.3 A First Analysis -- 6.3 A Tentative List of Limitations and Challenges -- 6.4 Ex Nihilo Generation -- 6.4.1 Decoder Feedforward -- 6.4.1.1 #1 Example: DeepHear Ragtime Melody Symbolic Music Generation System -- 6.4.1.2 #2 Example: deepAutoController Audio Music Generation System -- 6.4.2 Sampling -- 6.4.2.1 Sampling Basics -- 6.4.2.2 Sampling for Music Generation -- 6.4.2.3 Example: RBM-based Chord Music Generation System -- 6.5 Length Variability -- 6.5.1 Iterative Feedforward -- 6.5.1.1 #1 Example: Blues Chord Sequence Symbolic Music Generation System -- 6.5.1.2 #2 Example: Blues Melody and Chords Symbolic Music Generation System -- 6.6 Content Variability -- 6.6.1 Sampling -- 6.6.1.1 #1 Example: CONCERT Bach Melody Symbolic Music Generation System -- 6.6.1.2 #2 Example: Celtic Melody Symbolic Music Generation System -- 6.7 Expressiveness -- 6.7.1 Example: Performance RNN Piano Polyphony Symbolic Music Generation System -- 6.8 RNN and Iterative Feedforward Revisited -- 6.8.1 #1 Example: Time-Windowed Melody Symbolic Music Generation System -- 6.8.2 #2 Example: Sequential Melody Symbolic Music Generation System -- 6.8.3 #3 Example: BLSTM Chord Accompaniment Symbolic Music Generation System -- 6.8.4 Summary -- 6.9 Melody-Harmony Interaction -- 6.9.1 #1 Example: RNN-RBM Polyphony Symbolic Music Generation System -- 6.9.1.1 Other RNN-RBM Systems -- 6.9.2 #2 Example: Hexahedria Polyphony Symbolic Music Generation Architecture -- 6.9.3 #3 Example: Bi-Axial LSTM Polyphony Symbolic Music Generation Architecture -- 6.10 Control -- 6.10.1 Dimensions of Control Strategies -- 6.10.2 Sampling -- 6.10.2.1 Sampling for Iterative Feedforward Generation -- 6.10.2.2 Sampling for Incremental Generation -- 6.10.2.3 Sampling for Variational Decoder Feedforward Generation -- 6.10.2.4 Sampling for Adversarial Generation -- 6.10.2.5 Sampling for Other Generation Strategies -- 6.10.3 Conditioning -- 6.10.3.1 #1 Example: Rhythm Symbolic Music Generation System -- 6.10.3.2 #2 Example: WaveNet Speech and Music Audio Generation System -- 6.10.3.3 #3 Example: MidiNet Pop Music Melody Symbolic Music Generation System -- 6.10.3.4 #4 Example: DeepJ Style-Specific Polyphony Symbolic Music Generation System -- 6.10.3.5 #5 Example: Anticipation-RNN Bach Melody Symbolic Music Generation System -- 6.10.3.6 #6 Example: VRASH Melody Symbolic Music Generation System -- 6.10.4 Input Manipulation -- 6.10.4.1 #1 Example: DeepHear Ragtime Counterpoint Symbolic Music Generation System -- 6.10.4.2 Relation to Variational Autoencoders -- 6.10.4.3 #2 Example: Deep Dream Psychedelic Images Generation System -- 6.10.4.4 #3 Example: Style Transfer Painting Generation System -- 6.10.4.5 Style Transfer vs Transfer Learning -- 6.10.4.6 #4 Example: Music Style Transfer -- 6.10.5 Input Manipulation and Sampling -- 6.10.5.1 Example: C-RBM Polyphony Symbolic Music Generation System -- 6.10.6 Reinforcement -- 6.10.6.1 Example: RL-Tuner Melody Symbolic Music Generation System -- 6.10.7 Unit Selection -- 6.10.7.1 Example: Unit Selection and Concatenation Symbolic Melody Generation System -- 6.11 Style Transfer -- 6.11.1 Composition Style Transfer -- 6.11.2 Timbre Style Transfer -- 6.11.2.1 Examples: Audio Timbre Style Transfer Systems -- 6.11.2.2 Limits and Challenges -- 6.11.3 Performance Style Transfer -- 6.11.4 Example: FlowComposer Composition Support Environment -- 6.12 Structure -- 6.12.1 Example: MusicVAE Multivoice Hierarchical Symbolic Music Generation System -- 6.12.2 Other Temporal Architectural Hierarchies -- 6.13 Originality -- 6.13.1 Conditioning -- 6.13.1.1 Example: MidiNet Melody Generation System -- 6.13.2 Creative Adversarial Networks -- 6.13.2.1 Creative Adversarial Networks Painting Generation System -- 6.14 Incrementality -- 6.14.1 Note Instantiation Strategies -- 6.14.2 Example: DeepBach Chorale Multivoice Symbolic Music Generation System -- 6.15 Interactivity -- 6.15.1 #1 Example: deepAutoController Audio Music Generation System -- 6.15.2 #2 Example: DeepBach Chorale Symbolic Music Generation System -- 6.15.3 Interface Definition -- 6.16 Adaptability -- 6.17 Explainability -- 6.17.1 #1 Example: BachBot Chorale Polyphonic Symbolic Music Generation System -- 6.17.2 #2 Example: deepAutoController Audio Music Generation System -- 6.17.3 Towards Automated Analysis -- 6.18 Discussion -- Chapter 7 Analysis -- 7.1 Referencing and Abbreviations -- 7.2 System Analysis -- 7.3 Correlation Analysis -- Chapter 8 Discussion and Conclusion -- 8.1 Global versus Time Step -- 8.2 Convolution versus Recurrent -- 8.3 Style Transfer and Transfer Learning -- 8.4 Cooperation -- 8.5 Specialization -- 8.6 Evaluation and Creativity -- 8.7 Conclusion -- References -- Glossary -- Index Computer music Electronic books Hadjeres, Gaëtan Sonstige oth Pachet, François-David Sonstige oth Erscheint auch als Druck-Ausgabe Briot, Jean-Pierre Deep Learning Techniques for Music Generation Cham : Springer International Publishing AG, c2019 9783319701622 |
spellingShingle | Briot, Jean-Pierre Deep Learning Techniques for Music Generation Intro -- Preface -- Acknowledgements -- Contents -- List of Tables -- List of Figures -- Acronyms -- Chapter 1 Introduction -- 1.1 Motivation -- 1.1.1 Computer-Based Music Systems -- 1.1.2 Autonomy versus Assistance -- 1.1.3 Symbolic versus Sub-Symbolic AI -- 1.1.4 Deep Learning -- 1.1.5 Present and Future -- 1.2 This Book -- 1.2.1 Other Books and Sources -- 1.2.2 Other Models -- 1.2.3 Deep Learning versus Markov Models -- 1.2.4 Requisites and Roadmap -- 1.2.5 Limits -- Chapter 2 Method -- 2.1 Dimensions -- 2.1.1 Objective -- 2.1.2 Representation -- 2.1.3 Architecture -- 2.1.4 Challenge -- 2.1.5 Strategy -- 2.2 Discussion -- Chapter 3 Objective -- 3.1 Facets -- 3.1.1 Type -- 3.1.2 Destination and Use -- 3.1.3 Mode -- 3.1.4 Style -- Chapter 4 Representation -- 4.1 Phases and Types of Data -- 4.2 Audio versus Symbolic -- 4.3 Audio -- 4.3.1 Waveform -- 4.3.2 Transformed Representations -- 4.3.3 Spectrogram -- 4.3.4 Chromagram -- 4.4 Symbolic -- 4.5 Main Concepts -- 4.5.1 Note -- 4.5.2 Rest -- 4.5.3 Interval -- 4.5.4 Chord -- 4.5.5 Rhythm -- 4.5.5.1 Beat and Meter -- 4.5.5.2 Levels of Rhythm Information -- 4.6 Multivoice/Multitrack -- 4.7 Format -- 4.7.1 MIDI -- 4.7.2 Piano Roll -- 4.7.3 Text -- 4.7.3.1 Melody -- 4.7.3.2 Chord and Polyphony -- 4.7.4 Markup Language -- 4.7.5 Lead Sheet -- 4.8 Temporal Scope and Granularity -- 4.8.1 Temporal Scope -- 4.8.2 Temporal Granularity -- 4.9 Metadata -- 4.9.1 Note Hold/Ending -- 4.9.2 Note Denotation (versus Enharmony) -- 4.9.3 Feature Extraction -- 4.10 Expressiveness -- 4.10.1 Timing -- 4.10.2 Dynamics -- 4.10.3 Audio -- 4.11 Encoding -- 4.11.1 Strategies -- 4.11.2 From One-Hot to Many-Hot and to Multi-One-Hot -- 4.11.3 Summary -- 4.11.4 Binning -- 4.11.5 Pros and Cons -- 4.11.6 Chords -- 4.11.7 Special Hold and Rest Symbols -- 4.11.8 Drums and Percussion -- 4.12 Dataset -- 4.12.1 Transposition and Alignment -- 4.12.2 Datasets and Libraries -- Chapter 5 Architecture -- 5.1 Introduction to Neural Networks -- 5.1.1 Linear Regression -- 5.1.2 Notations -- 5.1.3 Model Training -- 5.1.4 Gradient Descent Training Algorithm -- 5.1.5 From Model to Architecture -- 5.1.6 From Model to Linear Algebra Representation -- 5.1.7 From Simple to Multivariate Model -- 5.1.8 Activation Function -- 5.2 Basic Building Block -- 5.2.1 Feedforward Computation -- 5.2.2 Computing Multiple Input Data Simultaneously -- 5.3 Machine Learning -- 5.3.1 Definition -- 5.3.2 Categories -- 5.3.3 Components -- 5.3.4 Optimization -- 5.4 Architectures -- 5.5 Multilayer Neural Network aka Feedforward Neural Network -- 5.5.1 Abstract Representation -- 5.5.2 Depth -- 5.5.3 Output Activation Function -- 5.5.4 Cost Function -- 5.5.5 Interpretation -- 5.5.6 Entropy and Cross-Entropy -- 5.5.7 Feedforward Propagation -- 5.5.8 Training -- 5.5.9 Overfitting -- 5.5.10 Regularization -- 5.5.11 Hyperparameters -- 5.5.12 Platforms and Libraries -- 5.6 Autoencoder -- 5.6.1 Sparse Autoencoder -- 5.6.2 Variational Autoencoder -- 5.6.3 Stacked Autoencoder -- 5.7 Restricted Boltzmann Machine (RBM) -- 5.7.1 Training -- 5.7.2 Sampling -- 5.7.3 Types of Variables -- 5.8 Recurrent Neural Network (RNN) -- 5.8.1 Visual Representation -- 5.8.2 Training -- 5.8.3 Long Short-Term Memory (LSTM) -- 5.8.4 Attention Mechanism -- 5.9 Convolutional Architectural Pattern -- 5.9.1 Principles -- 5.9.2 Stages -- 5.9.3 Pooling -- 5.9.4 Multilayer Convolutional Architecture -- 5.9.5 Convolution over Time -- 5.10 Conditioning Architectural Pattern -- 5.11 Generative Adversarial Networks (GAN) Architectural Pattern -- 5.11.1 Challenges -- 5.12 Reinforcement Learning -- 5.13 Compound Architectures -- 5.13.1 Composition Types -- 5.13.2 Bidirectional RNN -- 5.13.3 RNN Encoder-Decoder -- 5.13.4 Variational RNN Encoder-Decoder -- 5.13.5 Polyphonic Recurrent Networks -- 5.13.6 Further Compound Architectures -- 5.13.7 The Limits of Composition -- Chapter 6 Challenge and Strategy -- 6.1 Notations for Architecture and Representation Dimensions -- 6.2 An Introductory Example -- 6.2.1 Single-Step Feedforward Strategy -- 6.2.2 Example: MiniBach Chorale Counterpoint Accompaniment Symbolic Music Generation System -- 6.2.3 A First Analysis -- 6.3 A Tentative List of Limitations and Challenges -- 6.4 Ex Nihilo Generation -- 6.4.1 Decoder Feedforward -- 6.4.1.1 #1 Example: DeepHear Ragtime Melody Symbolic Music Generation System -- 6.4.1.2 #2 Example: deepAutoController Audio Music Generation System -- 6.4.2 Sampling -- 6.4.2.1 Sampling Basics -- 6.4.2.2 Sampling for Music Generation -- 6.4.2.3 Example: RBM-based Chord Music Generation System -- 6.5 Length Variability -- 6.5.1 Iterative Feedforward -- 6.5.1.1 #1 Example: Blues Chord Sequence Symbolic Music Generation System -- 6.5.1.2 #2 Example: Blues Melody and Chords Symbolic Music Generation System -- 6.6 Content Variability -- 6.6.1 Sampling -- 6.6.1.1 #1 Example: CONCERT Bach Melody Symbolic Music Generation System -- 6.6.1.2 #2 Example: Celtic Melody Symbolic Music Generation System -- 6.7 Expressiveness -- 6.7.1 Example: Performance RNN Piano Polyphony Symbolic Music Generation System -- 6.8 RNN and Iterative Feedforward Revisited -- 6.8.1 #1 Example: Time-Windowed Melody Symbolic Music Generation System -- 6.8.2 #2 Example: Sequential Melody Symbolic Music Generation System -- 6.8.3 #3 Example: BLSTM Chord Accompaniment Symbolic Music Generation System -- 6.8.4 Summary -- 6.9 Melody-Harmony Interaction -- 6.9.1 #1 Example: RNN-RBM Polyphony Symbolic Music Generation System -- 6.9.1.1 Other RNN-RBM Systems -- 6.9.2 #2 Example: Hexahedria Polyphony Symbolic Music Generation Architecture -- 6.9.3 #3 Example: Bi-Axial LSTM Polyphony Symbolic Music Generation Architecture -- 6.10 Control -- 6.10.1 Dimensions of Control Strategies -- 6.10.2 Sampling -- 6.10.2.1 Sampling for Iterative Feedforward Generation -- 6.10.2.2 Sampling for Incremental Generation -- 6.10.2.3 Sampling for Variational Decoder Feedforward Generation -- 6.10.2.4 Sampling for Adversarial Generation -- 6.10.2.5 Sampling for Other Generation Strategies -- 6.10.3 Conditioning -- 6.10.3.1 #1 Example: Rhythm Symbolic Music Generation System -- 6.10.3.2 #2 Example: WaveNet Speech and Music Audio Generation System -- 6.10.3.3 #3 Example: MidiNet Pop Music Melody Symbolic Music Generation System -- 6.10.3.4 #4 Example: DeepJ Style-Specific Polyphony Symbolic Music Generation System -- 6.10.3.5 #5 Example: Anticipation-RNN Bach Melody Symbolic Music Generation System -- 6.10.3.6 #6 Example: VRASH Melody Symbolic Music Generation System -- 6.10.4 Input Manipulation -- 6.10.4.1 #1 Example: DeepHear Ragtime Counterpoint Symbolic Music Generation System -- 6.10.4.2 Relation to Variational Autoencoders -- 6.10.4.3 #2 Example: Deep Dream Psychedelic Images Generation System -- 6.10.4.4 #3 Example: Style Transfer Painting Generation System -- 6.10.4.5 Style Transfer vs Transfer Learning -- 6.10.4.6 #4 Example: Music Style Transfer -- 6.10.5 Input Manipulation and Sampling -- 6.10.5.1 Example: C-RBM Polyphony Symbolic Music Generation System -- 6.10.6 Reinforcement -- 6.10.6.1 Example: RL-Tuner Melody Symbolic Music Generation System -- 6.10.7 Unit Selection -- 6.10.7.1 Example: Unit Selection and Concatenation Symbolic Melody Generation System -- 6.11 Style Transfer -- 6.11.1 Composition Style Transfer -- 6.11.2 Timbre Style Transfer -- 6.11.2.1 Examples: Audio Timbre Style Transfer Systems -- 6.11.2.2 Limits and Challenges -- 6.11.3 Performance Style Transfer -- 6.11.4 Example: FlowComposer Composition Support Environment -- 6.12 Structure -- 6.12.1 Example: MusicVAE Multivoice Hierarchical Symbolic Music Generation System -- 6.12.2 Other Temporal Architectural Hierarchies -- 6.13 Originality -- 6.13.1 Conditioning -- 6.13.1.1 Example: MidiNet Melody Generation System -- 6.13.2 Creative Adversarial Networks -- 6.13.2.1 Creative Adversarial Networks Painting Generation System -- 6.14 Incrementality -- 6.14.1 Note 
Instantiation Strategies -- 6.14.2 Example: DeepBach Chorale Multivoice Symbolic Music Generation System -- 6.15 Interactivity -- 6.15.1 #1 Example: deepAutoController Audio Music Generation System -- 6.15.2 #2 Example: DeepBach Chorale Symbolic Music Generation System -- 6.15.3 Interface Definition -- 6.16 Adaptability -- 6.17 Explainability -- 6.17.1 #1 Example: BachBot Chorale Polyphonic Symbolic Music Generation System -- 6.17.2 #2 Example: deepAutoController Audio Music Generation System -- 6.17.3 Towards Automated Analysis -- 6.18 Discussion -- Chapter 7 Analysis -- 7.1 Referencing and Abbreviations -- 7.2 System Analysis -- 7.3 Correlation Analysis -- Chapter 8 Discussion and Conclusion -- 8.1 Global versus Time Step -- 8.2 Convolution versus Recurrent -- 8.3 Style Transfer and Transfer Learning -- 8.4 Cooperation -- 8.5 Specialization -- 8.6 Evaluation and Creativity -- 8.7 Conclusion -- References -- Glossary -- Index Computer music |
title | Deep Learning Techniques for Music Generation |
title_auth | Deep Learning Techniques for Music Generation |
title_exact_search | Deep Learning Techniques for Music Generation |
title_exact_search_txtP | Deep Learning Techniques for Music Generation |
title_full | Deep Learning Techniques for Music Generation |
title_fullStr | Deep Learning Techniques for Music Generation |
title_full_unstemmed | Deep Learning Techniques for Music Generation |
title_short | Deep Learning Techniques for Music Generation |
title_sort | deep learning techniques for music generation |
topic | Computer music |
topic_facet | Computer music |
work_keys_str_mv | AT briotjeanpierre deeplearningtechniquesformusicgeneration AT hadjeresgaetan deeplearningtechniquesformusicgeneration AT pachetfrancoisdavid deeplearningtechniquesformusicgeneration |