Hands-On Markov Models with Python :: Implement Probabilistic Models for Learning Complex Data Sequences Using the Python Ecosystem.
Saved in:
Main Author: | Ankan, Ankur |
---|---|
Other Authors: | Panda, Abinash |
Format: | Electronic eBook |
Language: | English |
Published: | Birmingham : Packt Publishing Ltd, 2018. |
Subjects: | Python; Machine learning |
Online Access: | Full text |
Summary: | This book will help you become familiar with HMMs and different inference algorithms by working on real-world problems. You will start with an introduction to the basic concepts of Markov chains and Markov processes, and then delve deeper into understanding hidden Markov models and their types using practical examples. |
Description: | 1 online resource (172 pages) |
ISBN: | 9781788629331 1788629337 |
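As a purely illustrative aside (not code from the book, and using a made-up transition matrix), the summary above says the text starts from the basic concepts of Markov chains; a minimal NumPy sketch of one such concept, the stationary distribution of a discrete-time Markov chain, could look like this:

```python
import numpy as np

# Hypothetical 3-state transition matrix (rows sum to 1); illustrative only.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

# The stationary distribution pi satisfies pi @ P = pi, i.e. pi is the left
# eigenvector of P for eigenvalue 1, normalised so its entries sum to 1.
eigenvalues, eigenvectors = np.linalg.eig(P.T)
idx = np.argmin(np.abs(eigenvalues - 1.0))   # pick the eigenvalue closest to 1
pi = np.real(eigenvectors[:, idx])
pi = pi / pi.sum()

print(pi)   # long-run fraction of time the chain spends in each state
```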
Internal format
MARC
LEADER | 00000cam a2200000 i 4500 | ||
---|---|---|---|
001 | ZDB-4-EBA-on1056065211 | ||
003 | OCoLC | ||
005 | 20241004212047.0 | ||
006 | m o d | ||
007 | cr cnu---unuuu | ||
008 | 181006s2018 enk o 000 0 eng d | ||
040 | |a EBLCP |b eng |e pn |c EBLCP |d TEFOD |d MERUC |d CHVBK |d OCLCO |d LVT |d OCLCF |d OCLCQ |d UKAHL |d N$T |d OCLCQ |d NLW |d OCLCO |d NZAUC |d OCLCQ |d OCLCO |d OCLCL |d TMA |d OCLCQ | ||
020 | |a 9781788629331 |q (electronic bk.) | ||
020 | |a 1788629337 |q (electronic bk.) | ||
035 | |a (OCoLC)1056065211 | ||
037 | |a EC3DA449-96D8-4AB9-A3C1-CBCD11277E93 |b OverDrive, Inc. |n http://www.overdrive.com | ||
050 | 4 | |a QA76.73.P98 |b .A553 2018 | |
082 | 7 | |a 005.133 | |
049 | |a MAIN | ||
100 | 1 | |a Ankan, Ankur. | |
245 | 1 | 0 | |a Hands-On Markov Models with Python : |b Implement Probabilistic Models for Learning Complex Data Sequences Using the Python Ecosystem. |
260 | |a Birmingham : |b Packt Publishing Ltd, |c 2018. | ||
300 | |a 1 online resource (172 pages) | ||
336 | |a text |b txt |2 rdacontent | ||
337 | |a computer |b c |2 rdamedia | ||
338 | |a online resource |b cr |2 rdacarrier | ||
588 | 0 | |a Print version record. | |
505 | 0 | |a Cover; Title Page; Copyright and Credits; Packt Upsell; Contributors; Table of Contents; Preface; Chapter 1: Introduction to the Markov Process; Random variables; Random processes; Markov processes; Installing Python and packages; Installation on Windows; Installation on Linux; Markov chains or discrete-time Markov processes; Parameterization of Markov chains; Properties of Markov chains; Reducibility; Periodicity; Transience and recurrence; Mean recurrence time; Expected number of visits; Absorbing states; Ergodicity; Steady-state analysis and limiting distributions. | |
505 | 8 | |a Continuous-time Markov chains; Exponential distributions; Poisson process; Continuous-time Markov chain example; Continuous-time Markov chain; Summary; Chapter 2: Hidden Markov Models; Markov models; State space models; The HMM; Parameterization of HMM; Generating an observation sequence; Installing Python packages; Evaluation of an HMM; Extensions of HMM; Factorial HMMs; Tree-structured HMM; Summary; Chapter 3: State Inference -- Predicting the States; State inference in HMM; Dynamic programming; Forward algorithm. | |
505 | 8 | |a Computing the conditional distribution of the hidden state given the observations; Backward algorithm; Forward-backward algorithm (smoothing); The Viterbi algorithm; Summary; Chapter 4: Parameter Learning Using Maximum Likelihood; Maximum likelihood learning; MLE in a coin toss; MLE for normal distributions; MLE for HMMs; Supervised learning; Code; Unsupervised learning; Viterbi learning algorithm; The Baum-Welch algorithm (expectation maximization); Code; Summary; Chapter 5: Parameter Inference Using the Bayesian Approach; Bayesian learning; Selecting the priors; Intractability. | |
505 | 8 | |a Bayesian learning in HMM; Approximating required integrals; Sampling methods; Laplace approximations; Stolke and Omohundro's method; Variational methods; Code; Summary; Chapter 6: Time Series Predicting; Stock price prediction using HMM; Collecting stock price data; Features for stock price prediction; Predicting price using HMM; Summary; Chapter 7: Natural Language Processing; Part-of-speech tagging; Code; Getting data; Exploring the data; Finding the most frequent tag; Evaluating model accuracy; An HMM-based tagger; Speech recognition; Python packages for speech recognition. | |
505 | 8 | |a Basics of SpeechRecognition; Speech recognition from audio files; Speech recognition using the microphone; Summary; Chapter 8: 2D HMM for Image Processing; Recap of 1D HMM; 2D HMMs; Algorithm; Assumptions for the 2D HMM model; Parameter estimation using EM; Summary; Chapter 9: Markov Decision Process; Reinforcement learning; Reward hypothesis; State of the environment and the agent; Components of an agent; The Markov reward process; Bellman equation; MDP; Code example; Summary; Other Books You May Enjoy; Index. | |
520 | |a This book will help you become familiar with HMMs and different inference algorithms by working on real-world problems. You will start with an introduction to the basic concepts of Markov chains and Markov processes, and then delve deeper into understanding hidden Markov models and their types using practical examples. | ||
650 | 0 | |a Python. | |
650 | 0 | |a Machine learning. |0 http://id.loc.gov/authorities/subjects/sh85079324 | |
650 | 6 | |a Apprentissage automatique. | |
650 | 7 | |a Artificial intelligence. |2 bicssc | |
650 | 7 | |a Neural networks & fuzzy systems. |2 bicssc | |
650 | 7 | |a Natural language & machine translation. |2 bicssc | |
650 | 7 | |a Computers |x Intelligence (AI) & Semantics. |2 bisacsh | |
650 | 7 | |a Computers |x Neural Networks. |2 bisacsh | |
650 | 7 | |a Computers |x Natural Language Processing. |2 bisacsh | |
650 | 7 | |a Machine learning |2 fast | |
700 | 1 | |a Panda, Abinash. | |
776 | 0 | 8 | |i Print version: |a Ankan, Ankur. |t Hands-On Markov Models with Python : Implement Probabilistic Models for Learning Complex Data Sequences Using the Python Ecosystem. |d Birmingham : Packt Publishing Ltd, ©2018 |z 9781788625449 |
856 | 4 | 0 | |l FWS01 |p ZDB-4-EBA |q FWS_PDA_EBA |u https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=1904975 |3 Volltext |
936 | |a BATCHLOAD | ||
938 | |a Askews and Holts Library Services |b ASKH |n BDZ0037798058 | ||
938 | |a EBL - Ebook Library |b EBLB |n EBL5529464 | ||
938 | |a EBSCOhost |b EBSC |n 1904975 | ||
994 | |a 92 |b GEBAY | ||
912 | |a ZDB-4-EBA | ||
049 | |a DE-863 |
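The contents note (MARC 505) above lists the forward algorithm, the Viterbi algorithm, and the Baum-Welch algorithm among the book's topics. As a minimal sketch of how those pieces fit together in the Python ecosystem — assuming the hmmlearn package, which this record does not name and which may differ from the code the book itself presents — learning, decoding, and scoring a Gaussian HMM could look like this:

```python
import numpy as np
from hmmlearn import hmm   # assumption: hmmlearn is available; the record names no library

# Synthetic one-dimensional observations from two regimes (illustrative data only).
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0.0, 1.0, size=(100, 1)),
                    rng.normal(5.0, 1.0, size=(100, 1))])

model = hmm.GaussianHMM(n_components=2, covariance_type="diag",
                        n_iter=50, random_state=0)
model.fit(X)                       # Baum-Welch (EM) parameter learning
states = model.predict(X)          # Viterbi algorithm: most likely hidden-state path
log_likelihood = model.score(X)    # forward algorithm: log-likelihood of the observations

print(states[:10], log_likelihood)
```

The chapters on stock-price prediction and part-of-speech tagging listed above would follow the same fit/decode/score pattern, typically with discrete rather than Gaussian emission models.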
Record in the search index
DE-BY-FWS_katkey | ZDB-4-EBA-on1056065211 |
---|---|
_version_ | 1816882473699639296 |
adam_text | |
any_adam_object | |
author | Ankan, Ankur |
author2 | Panda, Abinash |
author2_role | |
author2_variant | a p ap |
author_facet | Ankan, Ankur Panda, Abinash |
author_role | |
author_sort | Ankan, Ankur |
author_variant | a a aa |
building | Verbundindex |
bvnumber | localFWS |
callnumber-first | Q - Science |
callnumber-label | QA76 |
callnumber-raw | QA76.73.P98 .A553 2018 |
callnumber-search | QA76.73.P98 .A553 2018 |
callnumber-sort | QA 276.73 P98 A553 42018 |
callnumber-subject | QA - Mathematics |
collection | ZDB-4-EBA |
ctrlnum | (OCoLC)1056065211 |
dewey-full | 005.133 |
dewey-hundreds | 000 - Computer science, information, general works |
dewey-ones | 005 - Computer programming, programs, data, security |
dewey-raw | 005.133 |
dewey-search | 005.133 |
dewey-sort | 15.133 |
dewey-tens | 000 - Computer science, information, general works |
discipline | Informatik |
format | Electronic eBook |
id | ZDB-4-EBA-on1056065211 |
illustrated | Not Illustrated |
indexdate | 2024-11-27T13:29:10Z |
institution | BVB |
isbn | 9781788629331 1788629337 |
language | English |
oclc_num | 1056065211 |
open_access_boolean | |
owner | MAIN DE-863 DE-BY-FWS |
owner_facet | MAIN DE-863 DE-BY-FWS |
physical | 1 online resource (172 pages) |
psigel | ZDB-4-EBA |
publishDate | 2018 |
publishDateSearch | 2018 |
publishDateSort | 2018 |
publisher | Packt Publishing Ltd, |
record_format | marc |
subject_GND | http://id.loc.gov/authorities/subjects/sh85079324 |
title | Hands-On Markov Models with Python : Implement Probabilistic Models for Learning Complex Data Sequences Using the Python Ecosystem. |
title_auth | Hands-On Markov Models with Python : Implement Probabilistic Models for Learning Complex Data Sequences Using the Python Ecosystem. |
title_exact_search | Hands-On Markov Models with Python : Implement Probabilistic Models for Learning Complex Data Sequences Using the Python Ecosystem. |
title_full | Hands-On Markov Models with Python : Implement Probabilistic Models for Learning Complex Data Sequences Using the Python Ecosystem. |
title_fullStr | Hands-On Markov Models with Python : Implement Probabilistic Models for Learning Complex Data Sequences Using the Python Ecosystem. |
title_full_unstemmed | Hands-On Markov Models with Python : Implement Probabilistic Models for Learning Complex Data Sequences Using the Python Ecosystem. |
title_short | Hands-On Markov Models with Python : |
title_sort | hands on markov models with python implement probabilistic models for learning complex data sequences using the python ecosystem |
title_sub | Implement Probabilistic Models for Learning Complex Data Sequences Using the Python Ecosystem. |
topic | Python. Machine learning. http://id.loc.gov/authorities/subjects/sh85079324 Apprentissage automatique. Artificial intelligence. bicssc Neural networks & fuzzy systems. bicssc Natural language & machine translation. bicssc Computers Intelligence (AI) & Semantics. bisacsh Computers Neural Networks. bisacsh Computers Natural Language Processing. bisacsh Machine learning fast |
topic_facet | Python. Machine learning. Apprentissage automatique. Artificial intelligence. Neural networks & fuzzy systems. Natural language & machine translation. Computers Intelligence (AI) & Semantics. Computers Neural Networks. Computers Natural Language Processing. Machine learning |
url | https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&AN=1904975 |
work_keys_str_mv | AT ankanankur handsonmarkovmodelswithpythonimplementprobabilisticmodelsforlearningcomplexdatasequencesusingthepythonecosystem AT pandaabinash handsonmarkovmodelswithpythonimplementprobabilisticmodelsforlearningcomplexdatasequencesusingthepythonecosystem |