Lifelong machine learning
Main authors: Chen, Zhiyuan; Liu, Bing
Format: Electronic eBook
Language: English
Published: San Rafael, California : Morgan & Claypool, [2018]
Edition: Second edition
Series: Synthesis lectures on artificial intelligence and machine learning ; #38
Subjects: Machine learning; Maschinelles Lernen
Online access: UBR01 (full text)
Summary: This is an introduction to an advanced machine learning paradigm that continuously learns by accumulating past knowledge that it then uses in future learning and problem solving. In contrast, the current dominant machine learning paradigm learns in isolation: given a training dataset, it runs a machine learning algorithm on the dataset to produce a model that is then used in its intended application. It makes no attempt to retain the learned knowledge and use it in subsequent learning. Unlike this isolated system, humans learn effectively with only a few examples precisely because our learning is very knowledge-driven: the knowledge learned in the past helps us learn new things with little data or effort. Lifelong learning aims to emulate this capability, because without it, an AI system cannot be considered truly intelligent. Research in lifelong learning has developed significantly in the relatively short time since the first edition of this book was published. The purpose of this second edition is to expand the definition of lifelong learning, update the content of several chapters, and add a new chapter about continual learning in deep neural networks--which has been actively researched over the past two or three years. A few chapters have also been reorganized to make each of them more coherent for the reader. Moreover, the authors want to propose a unified framework for the research area. Currently, there are several research topics in machine learning that are closely related to lifelong learning--most notably, multi-task learning, transfer learning, and metalearning--because they also employ the idea of knowledge sharing and transfer. This book brings all these topics under one roof and discusses their similarities and differences. Its goal is to introduce this emerging machine learning paradigm and present a comprehensive survey and review of the important research results and latest ideas in the area. This book is thus suitable for students, researchers, and practitioners who are interested in machine learning, data mining, natural language processing, or pattern recognition. Lecturers can readily use the book for courses in any of these related fields.
Notes: Part of: Synthesis digital library of engineering and computer science. Title from PDF title page (viewed on August 29, 2018)
Physical description: 1 online resource (xix, 187 pages) : illustrations
ISBN: 9781681733036
DOI: 10.2200/S00832ED1V01Y201802AIM037
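The summary above contrasts the dominant isolated-learning paradigm (train on one dataset, deploy the model, retain nothing) with lifelong learning, in which knowledge accumulated from earlier tasks helps a system learn new tasks from little data. As a loose, hypothetical illustration of that contrast only (not code from the book, and not the authors' algorithms), the Python sketch below keeps a small knowledge base of past task weights and uses it to bias learning on each new task:

```python
# Hypothetical sketch: isolated learning vs. a lifelong learner that retains
# and reuses knowledge across related tasks. Plain NumPy, least-squares models
# for illustration only; the real methods surveyed in the book are far richer.
import numpy as np


def train_isolated(X, y):
    """Isolated paradigm: fit from this dataset alone, retain nothing."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w


class LifelongLearner:
    """Accumulates past task weights and reuses them for future tasks."""

    def __init__(self, reg=0.1):
        self.knowledge = []  # weights learned on previous tasks
        self.reg = reg       # strength of the pull toward past knowledge

    def train_task(self, X, y):
        # Prior knowledge = average of previously learned weights.
        prior = (np.mean(self.knowledge, axis=0)
                 if self.knowledge else np.zeros(X.shape[1]))
        # Ridge-style solve biased toward the prior; the bias matters most
        # when the new task has little data (the "few examples" setting).
        A = X.T @ X + self.reg * np.eye(X.shape[1])
        b = X.T @ y + self.reg * prior
        w = np.linalg.solve(A, b)
        self.knowledge.append(w)  # retain what was learned for later tasks
        return w


# A sequence of related tasks: the lifelong learner carries knowledge forward.
rng = np.random.default_rng(0)
learner = LifelongLearner()
for task in range(3):
    X = rng.normal(size=(20, 5))
    y = X @ np.ones(5) + 0.1 * rng.normal(size=20)
    w_isolated = train_isolated(X, y)   # starts from scratch every time
    w_lifelong = learner.train_task(X, y)
```

The point of the sketch is only the flow of knowledge: the isolated trainer forgets everything after each task, while the lifelong learner's estimate for each new task is pulled toward what it has already learned.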
MARC record (internal format)
LEADER 00000nmm a2200000zcb4500
001 BV046427622
003 DE-604
005 20220801
007 cr|uuu---uuuuu
008 200217s2018 |||| o||u| ||||||eng d
020 __ |a 9781681733036 |9 978-1-68173-303-6
024 7_ |a 10.2200/S00832ED1V01Y201802AIM037 |2 doi
035 __ |a (ZDB-105-MCS)8438617
035 __ |a (OCoLC)1141133786
035 __ |a (DE-599)BVBBV046427622
040 __ |a DE-604 |b ger |e rda
041 0_ |a eng
049 __ |a DE-355
082 0_ |a 006.31 |2 23
100 1_ |a Chen, Zhiyuan |e Verfasser |0 (DE-588)1123153329 |4 aut
245 10 |a Lifelong machine learning |c Zhiyuan Chen, Google, Inc., Bing Liu, University of Illinois at Chicago
250 __ |a Second edition
264 _1 |a San Rafael, California |b Morgan & Claypool |c [2018]
300 __ |a 1 Online-Resource (xix, 187 Seiten) |b Illustrationen
336 __ |b txt |2 rdacontent
337 __ |b c |2 rdamedia
338 __ |b cr |2 rdacarrier
490 1_ |a Synthesis lectures on artificial intelligence and machine learning |v #38
500 __ |a Part of: Synthesis digital library of engineering and computer science
500 __ |a Title from PDF title page (viewed on August 29, 2018)
520 __ |a This is an introduction to an advanced machine learning paradigm that continuously learns by accumulating past knowledge that it then uses in future learning and problem solving. In contrast, the current dominant machine learning paradigm learns in isolation: given a training dataset, it runs a machine learning algorithm on the dataset to produce a model that is then used in its intended application. It makes no attempt to retain the learned knowledge and use it in subsequent learning. Unlike this isolated system, humans learn effectively with only a few examples precisely because our learning is very knowledge-driven: the knowledge learned in the past helps us learn new things with little data or effort. Lifelong learning aims to emulate this capability, because without it, an AI system cannot be considered truly intelligent. Research in lifelong learning has developed significantly in the relatively short time since the first edition of this book was published.
520 __ |a The purpose of this second edition is to expand the definition of lifelong learning, update the content of several chapters, and add a new chapter about continual learning in deep neural networks--which has been actively researched over the past two or three years. A few chapters have also been reorganized to make each of them more coherent for the reader. Moreover, the authors want to propose a unified framework for the research area. Currently, there are several research topics in machine learning that are closely related to lifelong learning--most notably, multi-task learning, transfer learning, and metalearning--because they also employ the idea of knowledge sharing and transfer. This book brings all these topics under one roof and discusses their similarities and differences. Its goal is to introduce this emerging machine learning paradigm and present a comprehensive survey and review of the important research results and latest ideas in the area.
520 __ |a This book is thus suitable for students, researchers, and practitioners who are interested in machine learning, data mining, natural language processing, or pattern recognition. Lecturers can readily use the book for courses in any of these related fields
650 _4 |a Machine learning
650 07 |a Maschinelles Lernen |0 (DE-588)4193754-5 |2 gnd |9 rswk-swf
689 00 |a Maschinelles Lernen |0 (DE-588)4193754-5 |D s
689 0_ |8 1\p |5 DE-604
700 1_ |a Liu, Bing |d 1963- |e Verfasser |0 (DE-588)1014900026 |4 aut
776 08 |i Erscheint auch als |n Druck-Ausgabe |z 9781681733029 |z 9781681733043
830 _0 |a Synthesis lectures on artificial intelligence and machine learning |v #38 |w (DE-604)BV043983076 |9 38
856 40 |u https://doi.org/10.2200/S00832ED1V01Y201802AIM037 |x Verlag |z URL des Erstveröffentlichers |3 Volltext
912 __ |a ZDB-105-MCS |a ZDB-105-MCB
999 __ |a oai:aleph.bib-bvb.de:BVB01-031839925
883 1_ |8 1\p |a cgwrk |d 20201028 |q DE-101 |u https://d-nb.info/provenance/plan#cgwrk
966 e_ |u https://www.doi.org/10.1007/978-3-031-01581-6 |l UBR01 |p ZDB-105-MCB |q UBR_Pick&Choose 2022 |x Verlag |3 Volltext
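The MARC listing above uses the common display layout "tag indicators |subfield-code value ..." (underscores stand for blank indicators). As a rough, hypothetical illustration of that layout (not tied to any particular library system or MARC library), a few lines of Python are enough to split such a display line into its tag, indicators, and subfields:

```python
# Hypothetical helper for the display layout used above: "TAG IND |a ... |b ...".
# It handles data fields with |-prefixed subfields, not control fields like 007.
from typing import Dict, List, Tuple


def parse_marc_display_line(line: str) -> Tuple[str, str, Dict[str, List[str]]]:
    """Return (tag, indicators, {subfield code: [values]}) for one display line."""
    head, _, rest = line.partition("|")
    tag, _, indicators = head.strip().partition(" ")
    subfields: Dict[str, List[str]] = {}
    for chunk in ("|" + rest).split("|")[1:]:  # each chunk looks like "a value "
        if not chunk.strip():
            continue
        code, value = chunk[0], chunk[1:].strip()
        subfields.setdefault(code, []).append(value)
    return tag, indicators.strip(), subfields


# Example with the title statement (field 245) from the record above:
tag, ind, sub = parse_marc_display_line(
    "245 10 |a Lifelong machine learning "
    "|c Zhiyuan Chen, Google, Inc., Bing Liu, University of Illinois at Chicago"
)
assert tag == "245" and ind == "10"
assert sub["a"] == ["Lifelong machine learning"]
```

In practice one would use an established MARC library rather than string splitting; the sketch only mirrors the textual layout shown here.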
Record in the search index
_version_ | 1804180976808493056 |
any_adam_object | |
author | Chen, Zhiyuan Liu, Bing 1963- |
author_GND | (DE-588)1123153329 (DE-588)1014900026 |
author_facet | Chen, Zhiyuan Liu, Bing 1963- |
author_role | aut aut |
author_sort | Chen, Zhiyuan |
author_variant | z c zc b l bl |
building | Verbundindex |
bvnumber | BV046427622 |
collection | ZDB-105-MCS ZDB-105-MCB |
ctrlnum | (ZDB-105-MCS)8438617 (OCoLC)1141133786 (DE-599)BVBBV046427622 |
dewey-full | 006.31 |
dewey-hundreds | 000 - Computer science, information, general works |
dewey-ones | 006 - Special computer methods |
dewey-raw | 006.31 |
dewey-search | 006.31 |
dewey-sort | 16.31 |
dewey-tens | 000 - Computer science, information, general works |
discipline | Informatik |
doi_str_mv | 10.2200/S00832ED1V01Y201802AIM037 |
edition | Second edition |
format | Electronic eBook |
id | DE-604.BV046427622 |
illustrated | Not Illustrated |
indexdate | 2024-07-10T08:44:19Z |
institution | BVB |
isbn | 9781681733036 |
language | English |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-031839925 |
oclc_num | 1141133786 |
open_access_boolean | |
owner | DE-355 DE-BY-UBR |
owner_facet | DE-355 DE-BY-UBR |
physical | 1 Online-Resource (xix, 187 Seiten) Illustrationen |
psigel | ZDB-105-MCS ZDB-105-MCB ZDB-105-MCB UBR_Pick&Choose 2022 |
publishDate | 2018 |
publishDateSearch | 2018 |
publishDateSort | 2018 |
publisher | Morgan & Claypool |
record_format | marc |
series | Synthesis lectures on artificial intelligence and machine learning |
series2 | Synthesis lectures on artificial intelligence and machine learning |
subject_GND | (DE-588)4193754-5 |
title | Lifelong machine learning |
title_auth | Lifelong machine learning |
title_exact_search | Lifelong machine learning |
title_full | Lifelong machine learning Zhiyuan Chen, Google, Inc., Bing Liu, University of Illinois at Chicago |
title_fullStr | Lifelong machine learning Zhiyuan Chen, Google, Inc., Bing Liu, University of Illinois at Chicago |
title_full_unstemmed | Lifelong machine learning Zhiyuan Chen, Google, Inc., Bing Liu, University of Illinois at Chicago |
title_short | Lifelong machine learning |
title_sort | lifelong machine learning |
topic | Machine learning Maschinelles Lernen (DE-588)4193754-5 gnd |
topic_facet | Machine learning Maschinelles Lernen |
url | https://doi.org/10.2200/S00832ED1V01Y201802AIM037 |
volume_link | (DE-604)BV043983076 |
work_keys_str_mv | AT chenzhiyuan lifelongmachinelearning AT liubing lifelongmachinelearning |