Making a machine that sees like us

Saved in:

Main Author: | Pizlo, Zygmunt |
---|---|
Format: | Electronic eBook |
Language: | English |
Published: | Oxford : Oxford University Press, 2014 |
Subjects: | Visual perception; Cognition; PSYCHOLOGY / Cognitive Psychology; PSYCHOLOGY / Physiological Psychology |
Online Access: | FAW01 FAW02 Full text |
Description: | Description based on print version record |
Description: | 1 online resource |
ISBN: | 0190228385 0199922543 0199922551 9780190228385 9780199922543 9780199922550 |

Internal format

MARC
LEADER 00000nmm a2200000zc 4500
001 BV043027830
003 DE-604
005 00000000000000.0
007 cr|uuu---uuuuu
008 151120s2014 |||| o||u| ||||||eng d
020 |a 0190228385 |9 0-19-022838-5
020 |a 0199922543 |9 0-19-992254-3
020 |a 0199922551 |9 0-19-992255-1
020 |a 9780190228385 |9 978-0-19-022838-5
020 |a 9780199922543 |9 978-0-19-992254-3
020 |a 9780199922550 |9 978-0-19-992255-0
035 |a (OCoLC)880147846
035 |a (DE-599)BVBBV043027830
040 |a DE-604 |b ger |e rda
041 0 |a eng
049 |a DE-1046 |a DE-1047
082 0 |a 152.14 |2 23
100 1 |a Pizlo, Zygmunt |e Verfasser |4 aut
245 1 0 |a Making a machine that sees like us |c Zygmunt Pizlo, Yunfeng Li, Tadamasa Sawada, and Robert M. Steinman
264 1 |a Oxford |b Oxford University Press |c 2014
300 |a 1 online resource
336 |b txt |2 rdacontent
337 |b c |2 rdamedia
338 |b cr |2 rdacarrier
500 |a Description based on print version record
505 8 |a "Making a Machine That Sees Like Us explains why and how our visual perceptions can provide us with an accurate representation of the external world. Along the way, it tells the story of a machine (a computational model) built by the authors that solves the computationally difficult problem of seeing the way humans do. This accomplishment required a radical paradigm shift - one that challenged preconceptions about visual perception and tested the limits of human behavior-modeling for practical application. The text balances scientific sophistication and compelling storytelling, making it accessible to both technical and general readers. Online demonstrations and references to the authors' previously published papers detail how the machine was developed and what drove the ideas needed to make it work. The authors contextualize their new theory of shape perception by highlighting criticisms and opposing theories, offering readers a fascinating account not only of their revolutionary results, but of the scientific process that guided the way"--
505 8 |a Machine generated contents note: -- Making a Machine That Sees Like Us -- 1. How the Stage Was Set When We Began -- 1.1 Introduction -- 1.2 What is this book about? -- 1.3 Analytical and Operational definitions of shape -- 1.4 Shape constancy as a phenomenon (something you can observe) -- 1.5 Complexity makes shape unique -- 1.6 How would the world look if we are wrong? -- 1.7 What had happened in the real world while we were away -- 1.8 Perception viewed as an Inverse Problem -- 1.9 How Bayesian inference can be used for modeling perception -- 1.10 What it means to have a model of vision, and why we need to have one -- 1.11 End of the beginning. -- 2. How This All Got Started -- 2.1 Controversy about shape constancy: 1980 -- 1995 -- 2.2 Events surrounding the 29th European Conference on Visual Perception (ECVP), St. Petersburg, Russia, August 20 -- 25, 2006 where we first announced our paradigm shift --
505 8 |a 2.3 The role of constraints in recovering the 3D shapes of polyhedral objects from line-drawings -- 2.4 Events surrounding the 31st European Conference on Visual Perception (ECVP) Utrecht, NL, August 24 -- 28, 2008, where we had our first big public confrontation -- 2.5 Monocular 3D shape recovery of both synthetic and real objects -- 3. Symmetry in Vision, Inside and Outside of the Laboratory -- 3.1 Why and how approximate computations make visual analyses fast and perfect: the perception of slanted 2D mirror-symmetrical figures -- 3.2 How human beings perceive 2D mirror-symmetry from perspective images -- 3.3 Why 3D mirror-symmetry is more difficult than 2D symmetry -- 3.4 Updating the Ideal Observer: how human beings perceive 3D mirror-symmetry from perspective images -- 3.5 Important role of Generalized Cones in 3D shape perception: how human beings perceive 3D translational-symmetry from perspective images -- 3.6 Michael Leyton's contribution to symmetry in shape perception --
505 8 |a 3.7 Leeuwenberg's attempt to develop a "Structural" explanation of Gestalt phenomena -- 4. Using Symmetry Is Not Simple -- 4.1 What is really going on? Examining the relationship between simplicity and likelihood -- 4.2 Clearly, simplicity is better than likelihood -- excluding degenerate views does not eliminate spurious 3D symmetrical interpretations -- 4.3 What goes with what? A new kind of Correspondence Problem -- 4.4 Everything becomes easier once symmetry is viewed as self-similarity: the first working solution of the Symmetry Correspondence Problem -- 5. A Second View Makes 3D Shape Perception Perfect -- 5.1 What we know about binocular vision and how we came to know it -- 5.2 How we worked out the binocular perception of symmetrical 3D shapes -- 5.3 How our new theory of shape perception, based on stereoacuity, accounts for old results -- 5.4 3D movies: what they are, what they want to be, and what it costs -- 5.5 Bayesian model of binocular shape perception --
505 8 |a 5.6 Why we could claim that our model is complete -- 6. Figure-Ground Organization, which Breaks Camouflage in Everyday Life, Permits the Veridical Recovery of a 3D Scene -- 6.1 Estimating the orientation of the ground-plane -- 6.2 How a coarse analysis of the positions and sizes of objects can be made -- 6.3 How a useful top-view representation was produced -- 6.4 Finding objects in the 2D image -- 6.5 Extracting relevant edges, grouping them and establishing symmetry correspondence -- 6.6 What can be done with a spatially-global map of a 3D scene? -- 7. What Made This Possible and What Comes Next? -- 7.1 Five Important conceptual contributions -- 7.2 Three of our technical contributions -- 7.3 Making our machine perceive and predict in dynamical environments -- 7.4 Solving the Figure-Ground Organization Problem with only a single 2D image -- 7.5 Recognizing individual objects by using a fast search of memory
650 7 |a PSYCHOLOGY / Cognitive Psychology |2 bisacsh
650 7 |a PSYCHOLOGY / Physiological Psychology |2 bisacsh
650 7 |a Cognition |2 fast
650 7 |a Visual perception |2 fast
650 4 |a Visual perception
650 4 |a Cognition
776 0 8 |i Erscheint auch als |n Druck-Ausgabe |a Pizlo, Zygmunt |t Making a machine that sees like us
856 4 0 |u http://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&db=nlabk&AN=779697 |x Aggregator |3 Volltext
912 |a ZDB-4-EBA
999 |a oai:aleph.bib-bvb.de:BVB01-028452484
966 e |u http://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&db=nlabk&AN=779697 |l FAW01 |p ZDB-4-EBA |q FAW_PDA_EBA |x Aggregator |3 Volltext
966 e |u http://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&db=nlabk&AN=779697 |l FAW02 |p ZDB-4-EBA |q FAW_PDA_EBA |x Aggregator |3 Volltext
Record in the search index
_version_ | 1804175382115516416 |
---|---|
any_adam_object | |
author | Pizlo, Zygmunt |
author_facet | Pizlo, Zygmunt |
author_role | aut |
author_sort | Pizlo, Zygmunt |
author_variant | z p zp |
building | Verbundindex |
bvnumber | BV043027830 |
collection | ZDB-4-EBA |
contents | "Making a Machine That Sees Like Us explains why and how our visual perceptions can provide us with an accurate representation of the external world. Along the way, it tells the story of a machine (a computational model) built by the authors that solves the computationally difficult problem of seeing the way humans do. This accomplishment required a radical paradigm shift - one that challenged preconceptions about visual perception and tested the limits of human behavior-modeling for practical application. The text balances scientific sophistication and compelling storytelling, making it accessible to both technical and general readers. Online demonstrations and references to the authors' previously published papers detail how the machine was developed and what drove the ideas needed to make it work. The authors contextualize their new theory of shape perception by highlighting criticisms and opposing theories, offering readers a fascinating account not only of their revolutionary results, but of the scientific process that guided the way"-- Machine generated contents note: -- Making a Machine That Sees Like Us -- 1. How the Stage Was Set When We Began -- 1.1 Introduction -- 1.2 What is this book about? -- 1.3 Analytical and Operational definitions of shape -- 1.4 Shape constancy as a phenomenon (something you can observe) -- 1.5 Complexity makes shape unique -- 1.6 How would the world look if we are wrong? -- 1.7 What had happened in the real world while we were away -- 1.8 Perception viewed as an Inverse Problem -- 1.9 How Bayesian inference can be used for modeling perception -- 1.10 What it means to have a model of vision, and why we need to have one -- 1.11 End of the beginning. -- 2. How This All Got Started -- 2.1 Controversy about shape constancy: 1980 -- 1995 -- 2.2 Events surrounding the 29th European Conference on Visual Perception (ECVP), St. Petersburg, Russia, August 20 -- 25, 2006 where we first announced our paradigm shift -- 2.3 The role of constraints in recovering the 3D shapes of polyhedral objects from line-drawings -- 2.4 Events surrounding the 31st European Conference on Visual Perception (ECVP) Utrecht, NL, August 24 -- 28, 2008, where we had our first big public confrontation -- 2.5 Monocular 3D shape recovery of both synthetic and real objects -- 3. Symmetry in Vision, Inside and Outside of the Laboratory -- 3.1 Why and how approximate computations make visual analyses fast and perfect: the perception of slanted 2D mirror-symmetrical figures -- 3.2 How human beings perceive 2D mirror-symmetry from perspective images -- 3.3 Why 3D mirror-symmetry is more difficult than 2D symmetry -- 3.4 Updating the Ideal Observer: how human beings perceive 3D mirror-symmetry from perspective images -- 3.5 Important role of Generalized Cones in 3D shape perception: how human beings perceive 3D translational-symmetry from perspective images -- 3.6 Michael Layton's contribution to symmetry in shape perception -- 3.7 Leeuwenberg's attempt to develop a "Structural" explanation of Gestalt phenomena -- 4. Using Symmetry Is Not Simple -- 4.1 What is really going on? Examining the relationship between simplicity and likelihood -- 4.2 Clearly, simplicity is better than likelihood -- excluding degenerate views does not eliminate spurious 3D symmetrical interpretations -- 4.3 What goes with what? A new kind of Correspondence Problem -- 4.4 Everything becomes easier once symmetry is viewed as self-similarity: the first working solution of the Symmetry Correspondence Problem -- 5. 
A Second View Makes 3D Shape Perception Perfect -- 5.1 What we know about binocular vision and how we came to know it -- 5.2 How we worked out the binocular perception of symmetrical 3D shapes -- 5.3 How our new theory of shape perception, based on stereoacuity, accounts for old results -- 5.4 3D movies: what they are, what they want to be, and what it costs -- 5.5 Bayesian model of binocular shape perception -- 5.6 Why we could claim that our model is complete -- 6. Figure-Ground Organization, which Breaks Camouflage in Everyday Life, Permits the Veridical Recovery of a 3D Scene -- 6.1 Estimating the orientation of the ground-plane -- 6.2 How a coarse analysis of the positions and sizes of objects can be made -- 6.3 How a useful top-view representation was produced -- 6.4 Finding objects in the 2D image -- 6.5 Extracting relevant edges, grouping them and establishing symmetry correspondence -- 6.6 What can be done with a spatially-global map of a 3D scene? -- 7. What Made This Possible and What Comes Next? -- 7.1 Five Important conceptual contributions -- 7.2 Three of our technical contributions -- 7.3 Making our machine perceive and predict in dynamical environments -- 7.4 Solving the Figure-Ground Organization Problem with only a single 2D image -- 7.5 Recognizing individual objects by using a fast search of memory |
ctrlnum | (OCoLC)880147846 (DE-599)BVBBV043027830 |
dewey-full | 152.14 |
dewey-hundreds | 100 - Philosophy & psychology |
dewey-ones | 152 - Perception, movement, emotions & drives |
dewey-raw | 152.14 |
dewey-search | 152.14 |
dewey-sort | 3152.14 |
dewey-tens | 150 - Psychology |
discipline | Psychologie |
format | Electronic eBook |
id | DE-604.BV043027830 |
illustrated | Not Illustrated |
indexdate | 2024-07-10T07:15:24Z |
institution | BVB |
isbn | 0190228385 0199922543 0199922551 9780190228385 9780199922543 9780199922550 |
language | English |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-028452484 |
oclc_num | 880147846 |
open_access_boolean | |
owner | DE-1046 DE-1047 |
owner_facet | DE-1046 DE-1047 |
physical | 1 online resource |
psigel | ZDB-4-EBA ZDB-4-EBA FAW_PDA_EBA |
publishDate | 2014 |
publishDateSearch | 2014 |
publishDateSort | 2014 |
publisher | Oxford University Press |
record_format | marc |
title | Making a machine that sees like us |
title_auth | Making a machine that sees like us |
title_exact_search | Making a machine that sees like us |
title_full | Making a machine that sees like us Zygmunt Pizlo, Yunfeng Li, Tadamasa Sawada, and Robert M. Steinman |
title_fullStr | Making a machine that sees like us Zygmunt Pizlo, Yunfeng Li, Tadamasa Sawada, and Robert M. Steinman |
title_full_unstemmed | Making a machine that sees like us Zygmunt Pizlo, Yunfeng Li, Tadamasa Sawada, and Robert M. Steinman |
title_short | Making a machine that sees like us |
title_sort | making a machine that sees like us |
topic | PSYCHOLOGY / Cognitive Psychology bisacsh PSYCHOLOGY / Physiological Psychology bisacsh Cognition fast Visual perception fast Visual perception Cognition |
topic_facet | PSYCHOLOGY / Cognitive Psychology PSYCHOLOGY / Physiological Psychology Cognition Visual perception |
url | http://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&db=nlabk&AN=779697 |
work_keys_str_mv | AT pizlozygmunt makingamachinethatseeslikeus |