Generative AI with Python and TensorFlow 2 : harness the power of generative models to create images, text, and music
Packed with intriguing real-world projects as well as theory, Generative AI with Python and TensorFlow 2 enables you to leverage artificial intelligence creatively and generate human-like data in the form of speech, text, images, and music.
Saved in:

Main author: | Babcock, Joseph
---|---
Other authors: | Bali, Raghav
Format: | Electronic eBook
Language: | English
Published: | Birmingham : Packt Publishing, Limited, 2021.
Subjects: | TensorFlow; Artificial intelligence; Pattern perception; Computer vision
Online access: | Full text
Summary: | Packed with intriguing real-world projects as well as theory, Generative AI with Python and TensorFlow 2 enables you to leverage artificial intelligence creatively and generate human-like data in the form of speech, text, images, and music.
Description: | 1 online resource (489 pages)
Bibliography: | Includes bibliographical references and index.
ISBN: | 1800208502; 9781800208506
MARC record (internal format)
LEADER | 00000cam a2200000 i 4500 | ||
---|---|---|---|
001 | ZDB-4-EBA-on1250079404 | ||
003 | OCoLC | ||
005 | 20241004212047.0 | ||
006 | m o d | ||
007 | cr cnu---unuuu | ||
008 | 210508s2021 enk ob 001 0 eng d | ||
040 | |a EBLCP |b eng |e rda |e pn |c EBLCP |d YDX |d NLW |d UKAHL |d N$T |d YDX |d OCLCF |d OCLCO |d UKMGB |d CZL |d OCLCO |d OCLCQ |d IEEEE |d OCLCO |d OCLCL |d QGK | ||
015 | |a GBC206097 |2 bnb | ||
016 | 7 | |a 020190163 |2 Uk | |
019 | |a 1250201058 | ||
020 | |a 1800208502 |q (electronic book) | ||
020 | |a 9781800208506 |q (electronic book) | ||
020 | |z 1800200889 | ||
020 | |z 9781800200883 | ||
035 | |a (OCoLC)1250079404 |z (OCoLC)1250201058 | ||
037 | |a 9781800208506 |b Packt Publishing | ||
037 | |a 10163291 |b IEEE | ||
050 | 4 | |a Q335 |b .B33 2021 | |
082 | 7 | |a 006.3 |2 23 | |
049 | |a MAIN | ||
100 | 1 | |a Babcock, Joseph, |e author. | |
245 | 1 | 0 | |a Generative AI with Python and TensorFlow 2 : |b harness the power of generative models to create images, text, and music / |c Joseph Babcock, Raghav Bali. |
264 | 1 | |a Birmingham : |b Packt Publishing, Limited, |c 2021. | |
300 | |a 1 online resource (489 pages) | ||
336 | |a text |b txt |2 rdacontent | ||
337 | |a computer |b c |2 rdamedia | ||
338 | |a online resource |b cr |2 rdacarrier | ||
588 | 0 | |a Online resource; title from PDF title page (viewed January 3, 2022). | |
504 | |a Includes bibliographical references and index. | ||
520 | |a Packed with intriguing real-world projects as well as theory, Generative AI with Python and TensorFlow 2 enables you to leverage artificial intelligence creatively and generate human-like data in the form of speech, text, images, and music. | ||
505 | 0 | |a Home -- Copyright -- Contributors -- Table of Contents -- Preface -- Chapter 1: An Introduction to Generative AI: "Drawing" Data from Models -- Applications of AI -- Discriminative and generative models -- Implementing generative models -- The rules of probability -- Discriminative and generative modeling and Bayes' theorem -- Why use generative models? -- The promise of deep learning -- Building a better digit classifier -- Generating images -- Style transfer and image transformation -- Fake news and chatbots -- Sound composition -- The rules of the game -- Unique challenges of generative models -- Summary -- References -- Chapter 2: Setting Up a TensorFlow Lab -- Deep neural network development and TensorFlow -- TensorFlow 2.0 -- VSCode -- Docker: A lightweight virtualization solution -- Important Docker commands and syntax -- Connecting Docker containers with docker-compose -- Kubernetes: Robust management of multi-container applications -- Important Kubernetes commands -- Kustomize for configuration management -- Kubeflow: an end-to-end machine learning lab -- Running Kubeflow locally with MiniKF -- Installing Kubeflow in AWS -- Installing Kubeflow in GCP -- Installing Kubeflow on Azure -- Installing Kubeflow using Terraform -- A brief tour of Kubeflow's components -- Kubeflow notebook servers -- Kubeflow pipelines -- Using Kubeflow Katib to optimize model hyperparameters -- Summary -- References -- Chapter 3: Building Blocks of Deep Neural Networks -- Perceptrons -- a brain in a function -- From tissues to TLUs -- From TLUs to tuning perceptrons -- Multi-layer perceptrons and backpropagation -- Backpropagation in practice -- The shortfalls of backpropagation -- Varieties of networks: Convolution and recursive -- Networks for seeing: Convolutional architectures -- Early CNNs -- AlexNet and other CNN innovations -- AlexNet architecture. | |
505 | 8 | |a Networks for sequence data -- RNNs and LSTMs -- Building a better optimizer -- Gradient descent to ADAM -- Xavier initialization -- Summary -- References -- Chapter 4: Teaching Networks to Generate Digits -- The MNIST database -- Retrieving and loading the MNIST dataset in TensorFlow -- Restricted Boltzmann Machines: generating pixels with statistical mechanics -- Hopfield networks and energy equations for neural networks -- Modeling data with uncertainty with Restricted Boltzmann Machines -- Contrastive divergence: Approximating a gradient -- Stacking Restricted Boltzmann Machines to generate images: the Deep Belief Network -- Creating an RBM using the TensorFlow Keras layers API -- Creating a DBN with the Keras Model API -- Summary -- References -- Chapter 5: Painting Pictures with Neural Networks Using VAEs -- Creating separable encodings of images -- The variational objective -- The reparameterization trick -- Inverse Autoregressive Flow -- Importing CIFAR -- Creating the network from TensorFlow 2 -- Summary -- References -- Chapter 6: Image Generation with GANs -- The taxonomy of generative models -- Generative adversarial networks -- The generator model -- Training GANs -- Non-saturating generator cost -- Maximum likelihood game -- Vanilla GAN -- Improved GANs -- Deep Convolutional GAN -- Vector arithmetic -- Conditional GAN -- Wasserstein GAN -- Progressive GAN -- The overall method -- Progressive growth-smooth fade-in -- Minibatch standard deviation -- Equalized learning rate -- Pixelwise normalization -- TensorFlow Hub implementation -- Challenges -- Training instability -- Mode collapse -- Uninformative loss and evaluation metrics -- Summary -- References -- Chapter 7: Style Transfer with GANs -- Paired style transfer using pix2pix GAN -- The U-Net generator -- The Patch-GAN discriminator -- Loss -- Training pix2pix -- Use cases. | |
505 | 8 | |a Unpaired style transfer using CycleGAN -- Overall setup for CycleGAN -- Adversarial loss -- Cycle loss -- Identity loss -- Overall loss -- Hands-on: Unpaired style transfer with CycleGAN -- Generator setup -- Discriminator setup -- GAN setup -- The training loop -- Related works -- DiscoGAN -- DualGAN -- Summary -- References -- Chapter 8: Deepfakes with GANs -- Deepfakes overview -- Modes of operation -- Replacement -- Re-enactment -- Editing -- Key feature set -- Facial Action Coding System (FACS) -- 3D Morphable Model -- Facial landmarks -- Facial landmark detection using OpenCV -- Facial landmark detection using dlib -- Facial landmark detection using MTCNN -- High-level workflow -- Common architectures -- Encoder-Decoder (ED) -- Generative Adversarial Networks (GANs) -- Replacement using autoencoders -- Task definition -- Dataset preparation -- Autoencoder architecture -- Training our own face swapper -- Results and limitations -- Re-enactment using pix2pix -- Dataset preparation -- Pix2pix GAN setup and training -- Results and limitations -- Challenges -- Ethical issues -- Technical challenges -- Generalization -- Occlusions -- Temporal issues -- Off-the-shelf implementations -- Summary -- References -- Chapter 9: The Rise of Methods for Text Generation -- Representing text -- Bag of Words -- Distributed representation -- Word2vec -- GloVe -- FastText -- Text generation and the magic of LSTMs -- Language modeling -- Hands-on: Character-level language model -- Decoding strategies -- Greedy decoding -- Beam search -- Sampling -- Hands-on: Decoding strategies -- LSTM variants and convolutions for text -- Stacked LSTMs -- Bidirectional LSTMs -- Convolutions and text -- Summary -- References -- Chapter 10: NLP 2.0: Using Transformers to Generate Text -- Attention -- Contextual embeddings -- Self-attention -- Transformers -- Overall architecture. | |
505 | 8 | |a Multi-head self-attention -- Positional encodings -- BERT-ology -- GPT 1, 2, 3 ... -- Generative pre-training: GPT -- GPT-2 -- Hands-on with GPT-2 -- Mammoth GPT-3 -- Summary -- References -- Chapter 11: Composing Music with Generative Models -- Getting started with music generation -- Representing music -- Music generation using LSTMs -- Dataset preparation -- LSTM model for music generation -- Music generation using GANs -- Generator network -- Discriminator network -- Training and results -- MuseGAN -- polyphonic music generation -- Jamming model -- Composer model -- Hybrid model -- Temporal model -- MuseGAN -- Generators -- Critic -- Training and results -- Summary -- References -- Chapter 12: Play Video Games with Generative AI: GAIL -- Reinforcement learning: Actions, agents, spaces, policies, and rewards -- Deep Q-learning -- Inverse reinforcement learning: Learning from experts -- Adversarial learning and imitation -- Running GAIL on PyBullet Gym -- The agent: Actor-Critic network -- The discriminator -- Training and results -- Summary -- References -- Chapter 13: Emerging Applications in Generative AI -- Introduction -- Finding new drugs with generative models -- Searching chemical space with generative molecular graph networks -- Folding proteins with generative models -- Solving partial differential equations with generative modeling -- Few shot learning for creating videos from images -- Generating recipes with deep learning -- Summary -- References -- Other Books You May Enjoy -- Index. | |
630 | 0 | 7 | |a TensorFlow. |2 lemac |
650 | 0 | |a Artificial intelligence. |0 http://id.loc.gov/authorities/subjects/sh85008180 | |
650 | 0 | |a Pattern perception. |0 http://id.loc.gov/authorities/subjects/sh85098789 | |
650 | 0 | |a Computer vision. |0 http://id.loc.gov/authorities/subjects/sh85029549 | |
650 | 6 | |a Intelligence artificielle. | |
650 | 6 | |a Perception des structures. | |
650 | 6 | |a Vision par ordinateur. | |
650 | 7 | |a artificial intelligence. |2 aat | |
650 | 7 | |a Artificial intelligence. |2 bicssc | |
650 | 7 | |a Neural networks & fuzzy systems. |2 bicssc | |
650 | 7 | |a Pattern recognition. |2 bicssc | |
650 | 7 | |a Computer vision. |2 bicssc | |
650 | 7 | |a Computers |x Intelligence (AI) & Semantics. |2 bisacsh | |
650 | 7 | |a Computers |x Computer Vision & Pattern Recognition. |2 bisacsh | |
650 | 7 | |a Computers |x Neural Networks. |2 bisacsh | |
650 | 7 | |a Artificial intelligence |2 fast | |
650 | 7 | |a Computer vision |2 fast | |
650 | 7 | |a Pattern perception |2 fast | |
650 | 7 | |a Intel·ligència artificial. |2 lemac | |
650 | 7 | |a Python (Llenguatge de programació) |2 lemac | |
700 | 1 | |a Bali, Raghav. | |
758 | |i has work: |a GENERATIVE AI WITH PYTHON AND TENSORFLOW 2 (Text) |1 https://id.oclc.org/worldcat/entity/E39PCYKQCVWW9VDY3gD9P4x343 |4 https://id.oclc.org/worldcat/ontology/hasWork | ||
776 | 0 | 8 | |i Print version: |a Babcock, Joseph. |t Generative AI with Python and TensorFlow 2. |d Birmingham : Packt Publishing, Limited, ©2021 |z 9781800200883 |
856 | 4 | 0 | |l FWS01 |p ZDB-4-EBA |q FWS_PDA_EBA |u https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=2923092 |3 Volltext |
938 | |a Askews and Holts Library Services |b ASKH |n AH38618608 | ||
938 | |a ProQuest Ebook Central |b EBLB |n EBL6587289 | ||
938 | |a EBSCOhost |b EBSC |n 2923092 | ||
938 | |a YBP Library Services |b YANK |n 302162613 | ||
994 | |a 92 |b GEBAY | ||
912 | |a ZDB-4-EBA | ||
049 | |a DE-863 |
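The MARC fields above are rendered in the catalogue's pipe-delimited display form, where a `|x` prefix marks subfield code `x` and bare pipes separate the tag and indicator columns. As a minimal, purely illustrative sketch (the function name and parsing rules below are assumptions about this display format, not part of any catalogue software), such a display line could be split back into its tag and subfields like this:

```python
import re


def parse_display_field(line: str) -> tuple[str, dict[str, list[str]]]:
    """Split a pipe-delimited MARC display line, e.g.
    '650 | 0 | |a Artificial intelligence. |2 fast',
    into its tag and a mapping of subfield codes to values."""
    tag = line.split("|", 1)[0].strip().split()[0]
    subfields: dict[str, list[str]] = {}
    # Subfields look like '|a value'; indicator cells ('| 0 |') have a space
    # right after the pipe and are therefore skipped by the pattern below.
    for code, value in re.findall(r"\|([a-z0-9])\s+([^|]*)", line):
        subfields.setdefault(code, []).append(value.strip())
    return tag, subfields


tag, sf = parse_display_field(
    "650 | 0 | |a Artificial intelligence. "
    "|0 http://id.loc.gov/authorities/subjects/sh85008180"
)
print(tag, sf["a"], sf["0"])
# 650 ['Artificial intelligence.'] ['http://id.loc.gov/authorities/subjects/sh85008180']
```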
Record in the search index
DE-BY-FWS_katkey | ZDB-4-EBA-on1250079404 |
---|---|
_version_ | 1816882543526412288 |
adam_text | |
any_adam_object | |
author | Babcock, Joseph |
author2 | Bali, Raghav |
author2_role | |
author2_variant | r b rb |
author_facet | Babcock, Joseph Bali, Raghav |
author_role | aut |
author_sort | Babcock, Joseph |
author_variant | j b jb |
building | Verbundindex |
bvnumber | localFWS |
callnumber-first | Q - Science |
callnumber-label | Q335 |
callnumber-raw | Q335 .B33 2021 |
callnumber-search | Q335 .B33 2021 |
callnumber-sort | Q 3335 B33 42021 |
callnumber-subject | Q - General Science |
collection | ZDB-4-EBA |
contents | Home -- Copyright -- Contributors -- Table of Contents -- Preface -- Chapter 1: An Introduction to Generative AI: "Drawing" Data from Models -- Applications of AI -- Discriminative and generative models -- Implementing generative models -- The rules of probability -- Discriminative and generative modeling and Bayes' theorem -- Why use generative models? -- The promise of deep learning -- Building a better digit classifier -- Generating images -- Style transfer and image transformation -- Fake news and chatbots -- Sound composition -- The rules of the game -- Unique challenges of generative models -- Summary -- References -- Chapter 2: Setting Up a TensorFlow Lab -- Deep neural network development and TensorFlow -- TensorFlow 2.0 -- VSCode -- Docker: A lightweight virtualization solution -- Important Docker commands and syntax -- Connecting Docker containers with docker-compose -- Kubernetes: Robust management of multi-container applications -- Important Kubernetes commands -- Kustomize for configuration management -- Kubeflow: an end-to-end machine learning lab -- Running Kubeflow locally with MiniKF -- Installing Kubeflow in AWS -- Installing Kubeflow in GCP -- Installing Kubeflow on Azure -- Installing Kubeflow using Terraform -- A brief tour of Kubeflow's components -- Kubeflow notebook servers -- Kubeflow pipelines -- Using Kubeflow Katib to optimize model hyperparameters -- Summary -- References -- Chapter 3: Building Blocks of Deep Neural Networks -- Perceptrons -- a brain in a function -- From tissues to TLUs -- From TLUs to tuning perceptrons -- Multi-layer perceptrons and backpropagation -- Backpropagation in practice -- The shortfalls of backpropagation -- Varieties of networks: Convolution and recursive -- Networks for seeing: Convolutional architectures -- Early CNNs -- AlexNet and other CNN innovations -- AlexNet architecture. 
Networks for sequence data -- RNNs and LSTMs -- Building a better optimizer -- Gradient descent to ADAM -- Xavier initialization -- Summary -- References -- Chapter 4: Teaching Networks to Generate Digits -- The MNIST database -- Retrieving and loading the MNIST dataset in TensorFlow -- Restricted Boltzmann Machines: generating pixels with statistical mechanics -- Hopfield networks and energy equations for neural networks -- Modeling data with uncertainty with Restricted Boltzmann Machines -- Contrastive divergence: Approximating a gradient -- Stacking Restricted Boltzmann Machines to generate images: the Deep Belief Network -- Creating an RBM using the TensorFlow Keras layers API -- Creating a DBN with the Keras Model API -- Summary -- References -- Chapter 5: Painting Pictures with Neural Networks Using VAEs -- Creating separable encodings of images -- The variational objective -- The reparameterization trick -- Inverse Autoregressive Flow -- Importing CIFAR -- Creating the network from TensorFlow 2 -- Summary -- References -- Chapter 6: Image Generation with GANs -- The taxonomy of generative models -- Generative adversarial networks -- The generator model -- Training GANs -- Non-saturating generator cost -- Maximum likelihood game -- Vanilla GAN -- Improved GANs -- Deep Convolutional GAN -- Vector arithmetic -- Conditional GAN -- Wasserstein GAN -- Progressive GAN -- The overall method -- Progressive growth-smooth fade-in -- Minibatch standard deviation -- Equalized learning rate -- Pixelwise normalization -- TensorFlow Hub implementation -- Challenges -- Training instability -- Mode collapse -- Uninformative loss and evaluation metrics -- Summary -- References -- Chapter 7: Style Transfer with GANs -- Paired style transfer using pix2pix GAN -- The U-Net generator -- The Patch-GAN discriminator -- Loss -- Training pix2pix -- Use cases. 
Unpaired style transfer using CycleGAN -- Overall setup for CycleGAN -- Adversarial loss -- Cycle loss -- Identity loss -- Overall loss -- Hands-on: Unpaired style transfer with CycleGAN -- Generator setup -- Discriminator setup -- GAN setup -- The training loop -- Related works -- DiscoGAN -- DualGAN -- Summary -- References -- Chapter 8: Deepfakes with GANs -- Deepfakes overview -- Modes of operation -- Replacement -- Re-enactment -- Editing -- Key feature set -- Facial Action Coding System (FACS) -- 3D Morphable Model -- Facial landmarks -- Facial landmark detection using OpenCV -- Facial landmark detection using dlib -- Facial landmark detection using MTCNN -- High-level workflow -- Common architectures -- Encoder-Decoder (ED) -- Generative Adversarial Networks (GANs) -- Replacement using autoencoders -- Task definition -- Dataset preparation -- Autoencoder architecture -- Training our own face swapper -- Results and limitations -- Re-enactment using pix2pix -- Dataset preparation -- Pix2pix GAN setup and training -- Results and limitations -- Challenges -- Ethical issues -- Technical challenges -- Generalization -- Occlusions -- Temporal issues -- Off-the-shelf implementations -- Summary -- References -- Chapter 9: The Rise of Methods for Text Generation -- Representing text -- Bag of Words -- Distributed representation -- Word2vec -- GloVe -- FastText -- Text generation and the magic of LSTMs -- Language modeling -- Hands-on: Character-level language model -- Decoding strategies -- Greedy decoding -- Beam search -- Sampling -- Hands-on: Decoding strategies -- LSTM variants and convolutions for text -- Stacked LSTMs -- Bidirectional LSTMs -- Convolutions and text -- Summary -- References -- Chapter 10: NLP 2.0: Using Transformers to Generate Text -- Attention -- Contextual embeddings -- Self-attention -- Transformers -- Overall architecture. Multi-head self-attention -- Positional encodings -- BERT-ology -- GPT 1, 2, 3 ... -- Generative pre-training: GPT -- GPT-2 -- Hands-on with GPT-2 -- Mammoth GPT-3 -- Summary -- References -- Chapter 11: Composing Music with Generative Models -- Getting started with music generation -- Representing music -- Music generation using LSTMs -- Dataset preparation -- LSTM model for music generation -- Music generation using GANs -- Generator network -- Discriminator network -- Training and results -- MuseGAN -- polyphonic music generation -- Jamming model -- Composer model -- Hybrid model -- Temporal model -- MuseGAN -- Generators -- Critic -- Training and results -- Summary -- References -- Chapter 12: Play Video Games with Generative AI: GAIL -- Reinforcement learning: Actions, agents, spaces, policies, and rewards -- Deep Q-learning -- Inverse reinforcement learning: Learning from experts -- Adversarial learning and imitation -- Running GAIL on PyBullet Gym -- The agent: Actor-Critic network -- The discriminator -- Training and results -- Summary -- References -- Chapter 13: Emerging Applications in Generative AI -- Introduction -- Finding new drugs with generative models -- Searching chemical space with generative molecular graph networks -- Folding proteins with generative models -- Solving partial differential equations with generative modeling -- Few shot learning for creating videos from images -- Generating recipes with deep learning -- Summary -- References -- Other Books You May Enjoy -- Index. |
ctrlnum | (OCoLC)1250079404 |
dewey-full | 006.3 |
dewey-hundreds | 000 - Computer science, information, general works |
dewey-ones | 006 - Special computer methods |
dewey-raw | 006.3 |
dewey-search | 006.3 |
dewey-sort | 16.3 |
dewey-tens | 000 - Computer science, information, general works |
discipline | Informatik |
format | Electronic eBook |
id | ZDB-4-EBA-on1250079404 |
illustrated | Not Illustrated |
indexdate | 2024-11-27T13:30:17Z |
institution | BVB |
isbn | 1800208502 9781800208506 |
language | English |
oclc_num | 1250079404 |
open_access_boolean | |
owner | MAIN DE-863 DE-BY-FWS |
owner_facet | MAIN DE-863 DE-BY-FWS |
physical | 1 online resource (489 pages) |
psigel | ZDB-4-EBA |
publishDate | 2021 |
publishDateSearch | 2021 |
publishDateSort | 2021 |
publisher | Packt Publishing, Limited, |
record_format | marc |
subject_GND | http://id.loc.gov/authorities/subjects/sh85008180 http://id.loc.gov/authorities/subjects/sh85098789 http://id.loc.gov/authorities/subjects/sh85029549 |
title | Generative AI with Python and TensorFlow 2 : harness the power of generative models to create images, text, and music / |
title_auth | Generative AI with Python and TensorFlow 2 : harness the power of generative models to create images, text, and music / |
title_exact_search | Generative AI with Python and TensorFlow 2 : harness the power of generative models to create images, text, and music / |
title_full | Generative AI with Python and TensorFlow 2 : harness the power of generative models to create images, text, and music / Joseph Babcock, Raghav Bali. |
title_fullStr | Generative AI with Python and TensorFlow 2 : harness the power of generative models to create images, text, and music / Joseph Babcock, Raghav Bali. |
title_full_unstemmed | Generative AI with Python and TensorFlow 2 : harness the power of generative models to create images, text, and music / Joseph Babcock, Raghav Bali. |
title_short | Generative AI with Python and TensorFlow 2 : |
title_sort | generative ai with python and tensorflow 2 harness the power of generative models to create images text and music |
title_sub | harness the power of generative models to create images, text, and music / |
topic | TensorFlow. lemac Artificial intelligence. http://id.loc.gov/authorities/subjects/sh85008180 Pattern perception. http://id.loc.gov/authorities/subjects/sh85098789 Computer vision. http://id.loc.gov/authorities/subjects/sh85029549 Intelligence artificielle. Perception des structures. Vision par ordinateur. artificial intelligence. aat Artificial intelligence. bicssc Neural networks & fuzzy systems. bicssc Pattern recognition. bicssc Computer vision. bicssc Computers Intelligence (AI) & Semantics. bisacsh Computers Computer Vision & Pattern Recognition. bisacsh Computers Neural Networks. bisacsh Artificial intelligence fast Computer vision fast Pattern perception fast Intel·ligència artificial. lemac Python (Llenguatge de programació) lemac |
topic_facet | TensorFlow. Artificial intelligence. Pattern perception. Computer vision. Intelligence artificielle. Perception des structures. Vision par ordinateur. artificial intelligence. Neural networks & fuzzy systems. Pattern recognition. Computers Intelligence (AI) & Semantics. Computers Computer Vision & Pattern Recognition. Computers Neural Networks. Artificial intelligence Computer vision Pattern perception Intel·ligència artificial. Python (Llenguatge de programació) |
url | https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=2923092 |
work_keys_str_mv | AT babcockjoseph generativeaiwithpythonandtensorflow2harnessthepowerofgenerativemodelstocreateimagestextandmusic AT baliraghav generativeaiwithpythonandtensorflow2harnessthepowerofgenerativemodelstocreateimagestextandmusic |
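
The table above lists the Solr fields under which this record is indexed by the discovery layer. As a hedged sketch only, assuming a locally reachable VuFind-style Solr core at `http://localhost:8983/solr/biblio` (host, port, and core name are assumptions, not taken from this record), those fields could be queried with the standard Solr select handler:

```python
import json
import urllib.parse
import urllib.request

# Assumed endpoint of a VuFind-style Solr core; adjust host, port, and core name
# to match a real installation.
SOLR_SELECT = "http://localhost:8983/solr/biblio/select"

params = {
    "q": 'id:"ZDB-4-EBA-on1250079404"',      # record id shown in the index table above
    "fl": "title,author,isbn,publishDate",   # index fields listed above
    "wt": "json",
}

url = SOLR_SELECT + "?" + urllib.parse.urlencode(params)
with urllib.request.urlopen(url) as resp:    # standard Solr JSON response
    docs = json.load(resp)["response"]["docs"]

for doc in docs:
    print(doc.get("title"), doc.get("isbn"))
```

Fields such as `title_sort` and `author_facet` in the table are the sort and facet variants of the same bibliographic data.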