Transformers for natural language processing: build, train, and fine-tune deep neural network architectures for NLP with Python, PyTorch, TensorFlow, BERT, and GPT-3
Saved in:
Main author: Rothman, Denis
Format: Electronic eBook
Language: English
Published: Birmingham ; Mumbai : Packt, [March 2022]
Edition: Second edition
Series: Expert insight
Subjects: Deep Learning; Natürliche Sprache (natural language)
Online access: DE-Aug4 DE-M347 DE-898 DE-706 DE-29 DE-573
Summary: Intro -- Copyright -- Foreword -- Contributors -- Table of Contents -- Preface -- Chapter 1: What are Transformers? -- The ecosystem of transformers -- Industry 4.0 -- Foundation models -- Is programming becoming a sub-domain of NLP? -- The future of artificial intelligence specialists -- Optimizing NLP models with transformers -- The background of transformers -- What resources should we use? -- The rise of Transformer 4.0 seamless APIs -- Choosing ready-to-use API-driven libraries -- Choosing a Transformer Model -- The role of Industry 4.0 artificial intelligence specialists -- Summary -- Questions -- References -- Chapter 2: Getting Started with the Architecture of the Transformer Model -- The rise of the Transformer: Attention is All You Need -- The encoder stack -- Input embedding -- Positional encoding -- Sublayer 1: Multi-head attention -- Sublayer 2: Feedforward network -- The decoder stack -- Output embedding and position encoding -- The attention layers -- The FFN sublayer, the post-LN, and the linear layer -- Training and performance -- Transformer models in Hugging Face -- Summary -- Questions -- References -- Chapter 3: Fine-Tuning BERT Models -- The architecture of BERT -- The encoder stack -- Preparing the pretraining input environment -- Pretraining and fine-tuning a BERT model -- Fine-tuning BERT -- Hardware constraints -- Installing the Hugging Face PyTorch interface for BERT -- Importing the modules -- Specifying CUDA as the device for torch -- Loading the dataset -- Creating sentences, label lists, and adding BERT tokens -- Activating the BERT tokenizer -- Processing the data -- Creating attention masks -- Splitting the data into training and validation sets -- Converting all the data into torch tensors -- Selecting a batch size and creating an iterator -- BERT model configuration (a code sketch of the Chapter 3 workflow follows this header)
Note: Description based on publisher supplied metadata and other sources
Physical description: 1 online resource (xxxiii, 565 pages), illustrations (partly in color)
ISBN: 9781803243481 1803243481
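The Chapter 3 outline in the summary above reads as a step-by-step recipe for fine-tuning BERT with the Hugging Face PyTorch interface: load a dataset, tokenize, build attention masks, split into training and validation sets, convert everything to torch tensors, batch the data, and configure the model. The sketch below is one minimal way to realize those steps; the file name in_domain_train.tsv, its column layout, the bert-base-uncased checkpoint, and all hyperparameters are illustrative assumptions, not taken from this record or from the book itself.

```python
# A minimal sketch of the BERT fine-tuning setup outlined in Chapter 3 of the book's
# table of contents. File name, column layout, checkpoint, and hyperparameters are
# illustrative assumptions, not taken from this catalog record.
import pandas as pd
import torch
from torch.utils.data import DataLoader, RandomSampler, TensorDataset, random_split
from transformers import BertForSequenceClassification, BertTokenizer

# Specifying CUDA as the device for torch when a GPU is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Loading the dataset and creating sentence and label lists (hypothetical TSV layout)
df = pd.read_csv("in_domain_train.tsv", sep="\t", header=None,
                 names=["source", "label", "notes", "sentence"])
sentences = df["sentence"].tolist()
labels = torch.tensor(df["label"].values)

# Activating the BERT tokenizer; it adds the [CLS] and [SEP] tokens and
# returns the attention masks alongside the padded input IDs
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", do_lower_case=True)
encodings = tokenizer(sentences, padding="max_length", truncation=True,
                      max_length=128, return_tensors="pt")

# Converting all the data into torch tensors and splitting into training
# and validation sets
dataset = TensorDataset(encodings["input_ids"], encodings["attention_mask"], labels)
n_val = int(0.1 * len(dataset))
train_set, val_set = random_split(dataset, [len(dataset) - n_val, n_val])

# Selecting a batch size and creating an iterator over the training data
batch_size = 32
train_loader = DataLoader(train_set, sampler=RandomSampler(train_set),
                          batch_size=batch_size)

# BERT model configuration for a two-label sequence classification head
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.to(device)
```

torch.utils.data.random_split and RandomSampler are used here simply as one reasonable way to cover the outline's splitting and iterator steps; the book's own notebooks may organize these steps differently.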
Internal format
MARC
LEADER | 00000nam a2200000 c 4500 | ||
001 | BV048295295 | ||
003 | DE-604 | ||
005 | 20250128 | ||
007 | cr|uuu---uuuuu | ||
008 | 220622s2022 xx a||| o|||| 00||| eng d | ||
020 | |a 9781803243481 |c EBook |9 978-1-80324-348-1 | ||
020 | |a 1803243481 |c EBook |9 1-80324-348-1 | ||
035 | |a (ZDB-30-PQE)EBC6938265 | ||
035 | |a (DE-599)KEP077195345 | ||
040 | |a DE-604 |b ger |e rda | ||
041 | 0 | |a eng | |
049 | |a DE-M347 |a DE-706 |a DE-29 |a DE-Aug4 |a DE-898 |a DE-11 |a DE-573 | ||
082 | 0 | |a 006.31 | |
084 | |a ST 306 |0 (DE-625)143654: |2 rvk | ||
084 | |a ST 302 |0 (DE-625)143652: |2 rvk | ||
100 | 1 | |a Rothman, Denis |e Verfasser |0 (DE-588)1221752987 |4 aut | |
245 | 1 | 0 | |a Transformers for natural language processing |b build, train, and fine-tune deep neural network architectures for NLP with Python, PyTorch, TensorFlow, BERT, and GPT-3 |c Denis Rothman |
250 | |a Second edition | ||
264 | 1 | |a Birmingham ; Mumbai |b Packt |c [March 2022] | |
264 | 4 | |c © 2022 | |
300 | |a 1 Online-Ressource (xxxiii, 565 Seiten) |b Illustrationen (teilweise farbig) | ||
336 | |b txt |2 rdacontent | ||
337 | |b c |2 rdamedia | ||
338 | |b cr |2 rdacarrier | ||
490 | 0 | |a Expert insight | |
500 | |a Description based on publisher supplied metadata and other sources | ||
520 | 3 | |a Intro -- Copyright -- Foreword -- Contributors -- Table of Contents -- Preface -- Chapter 1: What are Transformers? -- The ecosystem of transformers -- Industry 4.0 -- Foundation models -- Is programming becoming a sub-domain of NLP? -- The future of artificial intelligence specialists -- Optimizing NLP models with transformers -- The background of transformers -- What resources should we use? -- The rise of Transformer 4.0 seamless APIs -- Choosing ready-to-use API-driven libraries -- Choosing a Transformer Model -- The role of Industry 4.0 artificial intelligence specialists -- Summary -- Questions -- References -- Chapter 2: Getting Started with the Architecture of the Transformer Model -- The rise of the Transformer: Attention is All You Need -- The encoder stack -- Input embedding -- Positional encoding -- Sublayer 1: Multi-head attention -- Sublayer 2: Feedforward network -- The decoder stack -- Output embedding and position encoding -- The attention layers -- The FFN sublayer, the post-LN, and the linear layer -- Training and performance -- Transformer models in Hugging Face -- Summary -- Questions -- References -- Chapter 3: Fine-Tuning BERT Models -- The architecture of BERT -- The encoder stack -- Preparing the pretraining input environment -- Pretraining and fine-tuning a BERT model -- Fine-tuning BERT -- Hardware constraints -- Installing the Hugging Face PyTorch interface for BERT -- Importing the modules -- Specifying CUDA as the device for torch -- Loading the dataset -- Creating sentences, label lists, and adding BERT tokens -- Activating the BERT tokenizer -- Processing the data -- Creating attention masks -- Splitting the data into training and validation sets -- Converting all the data into torch tensors -- Selecting a batch size and creating an iterator -- BERT model configuration | |
650 | 7 | |a Computers / Artificial Intelligence / Natural Language Processing |2 BISAC | |
650 | 7 | |a Computers / Data Science / Neural Networks |2 BISAC | |
650 | 0 | 7 | |a Deep Learning |0 (DE-588)1135597375 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Natürliche Sprache |0 (DE-588)4041354-8 |2 gnd |9 rswk-swf |
653 | 0 | |a Electronic books | |
689 | 0 | 0 | |a Natürliche Sprache |0 (DE-588)4041354-8 |D s |
689 | 0 | 1 | |a Deep Learning |0 (DE-588)1135597375 |D s |
689 | 0 | |5 DE-604 | |
776 | 0 | 8 | |i Erscheint auch als |n Druck-Ausgabe |z 978-1-80324-733-5 |
912 | |a ZDB-30-PQE | ||
912 | |a ZDB-221-PDA | ||
912 | |a ZDB-221-PPK | ||
943 | 1 | |a oai:aleph.bib-bvb.de:BVB01-033675188 | |
966 | e | |u https://portal.igpublish.com/iglibrary/search/PACKT0006192.html |l DE-Aug4 |p ZDB-221-PPK |q FHA_PDA_PPK_Kauf |x Aggregator |3 Volltext | |
966 | e | |u https://portal.igpublish.com/iglibrary/search/PACKT0006192.html |l DE-M347 |p ZDB-221-PDA |q FHM_PDA_PDA_Kauf |x Aggregator |3 Volltext | |
966 | e | |u https://ebookcentral.proquest.com/lib/oth-regensburg/detail.action?docID=6938265 |l DE-898 |p ZDB-30-PQE |q FHR_PDA_PQE_Kauf |x Aggregator |3 Volltext | |
966 | e | |u https://portal.igpublish.com/iglibrary/search/PACKT0006192.html |l DE-706 |p ZDB-221-PDA |x Aggregator |3 Volltext | |
966 | e | |u https://ebookcentral.proquest.com/lib/erlangen/detail.action?docID=6938265 |l DE-29 |p ZDB-30-PQE |q UER_PDA_PQE_Kauf |x Aggregator |3 Volltext | |
966 | e | |u https://portal.igpublish.com/iglibrary/search/PACKT0006192.html |l DE-573 |p ZDB-221-PDA |x Aggregator |3 Volltext |
Record in the search index
_version_ | 1823932150204858368 |
adam_text | |
adam_txt | |
any_adam_object | |
any_adam_object_boolean | |
author | Rothman, Denis |
author_GND | (DE-588)1221752987 |
author_facet | Rothman, Denis |
author_role | aut |
author_sort | Rothman, Denis |
author_variant | d r dr |
building | Verbundindex |
bvnumber | BV048295295 |
classification_rvk | ST 306 ST 302 |
collection | ZDB-30-PQE ZDB-221-PDA ZDB-221-PPK |
ctrlnum | (ZDB-30-PQE)EBC6938265 (DE-599)KEP077195345 |
dewey-full | 006.31 |
dewey-hundreds | 000 - Computer science, information, general works |
dewey-ones | 006 - Special computer methods |
dewey-raw | 006.31 |
dewey-search | 006.31 |
dewey-sort | 16.31 |
dewey-tens | 000 - Computer science, information, general works |
discipline | Informatik |
discipline_str_mv | Informatik |
edition | Second edition |
format | Electronic eBook |
id | DE-604.BV048295295 |
illustrated | Illustrated |
index_date | 2024-07-03T20:04:55Z |
indexdate | 2025-02-13T09:00:46Z |
institution | BVB |
isbn | 9781803243481 1803243481 |
language | English |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-033675188 |
open_access_boolean | |
owner | DE-M347 DE-706 DE-29 DE-Aug4 DE-898 DE-BY-UBR DE-11 DE-573 |
owner_facet | DE-M347 DE-706 DE-29 DE-Aug4 DE-898 DE-BY-UBR DE-11 DE-573 |
physical | 1 Online-Ressource (xxxiii, 565 Seiten) Illustrationen (teilweise farbig) |
psigel | ZDB-30-PQE ZDB-221-PDA ZDB-221-PPK ZDB-221-PPK FHA_PDA_PPK_Kauf ZDB-221-PDA FHM_PDA_PDA_Kauf ZDB-30-PQE FHR_PDA_PQE_Kauf ZDB-30-PQE UER_PDA_PQE_Kauf |
publishDate | 2022 |
publishDateSearch | 2022 |
publishDateSort | 2022 |
publisher | Packt |
record_format | marc |
series2 | Expert insight |
subject_GND | (DE-588)1135597375 (DE-588)4041354-8 |
title | Transformers for natural language processing build, train, and fine-tune deep neural network architectures for NLP with Python, PyTorch, TensorFlow, BERT, and GPT-3 |
title_auth | Transformers for natural language processing build, train, and fine-tune deep neural network architectures for NLP with Python, PyTorch, TensorFlow, BERT, and GPT-3 |
title_exact_search | Transformers for natural language processing build, train, and fine-tune deep neural network architectures for NLP with Python, PyTorch, TensorFlow, BERT, and GPT-3 |
title_exact_search_txtP | Transformers for natural language processing build, train, and fine-tune deep neural network architectures for NLP with Python, PyTorch, TensorFlow, BERT, and GPT-3 |
title_full | Transformers for natural language processing build, train, and fine-tune deep neural network architectures for NLP with Python, PyTorch, TensorFlow, BERT, and GPT-3 Denis Rothman |
title_fullStr | Transformers for natural language processing build, train, and fine-tune deep neural network architectures for NLP with Python, PyTorch, TensorFlow, BERT, and GPT-3 Denis Rothman |
title_full_unstemmed | Transformers for natural language processing build, train, and fine-tune deep neural network architectures for NLP with Python, PyTorch, TensorFlow, BERT, and GPT-3 Denis Rothman |
title_short | Transformers for natural language processing |
title_sort | transformers for natural language processing build train and fine tune deep neural network architectures for nlp with python pytorch tensorflow bert and gpt 3 |
title_sub | build, train, and fine-tune deep neural network architectures for NLP with Python, PyTorch, TensorFlow, BERT, and GPT-3 |
topic | Computers / Artificial Intelligence / Natural Language Processing BISAC Computers / Data Science / Neural Networks BISAC Deep Learning (DE-588)1135597375 gnd Natürliche Sprache (DE-588)4041354-8 gnd |
topic_facet | Computers / Artificial Intelligence / Natural Language Processing Computers / Data Science / Neural Networks Deep Learning Natürliche Sprache |
work_keys_str_mv | AT rothmandenis transformersfornaturallanguageprocessingbuildtrainandfinetunedeepneuralnetworkarchitecturesfornlpwithpythonpytorchtensorflowbertandgpt3 |