Region-based feature interpretation for recognizing 3D models in 2D images
Abstract: "In model-based vision, features found in a two- dimensional image are matched to three-dimensional model features such that, from some view, the model features appear very much like the image features. The goal is to find the feature matches and rigid model transformations (or poses)...
Saved in:

Main Author: | Clemens, David Taylor
---|---
Format: | Thesis, Book
Language: | English
Published: | [Cambridge, Mass.] : MIT Artificial Intelligence Laboratory, 1991
Subjects: | Computer vision
Summary: | Abstract: "In model-based vision, features found in a two-dimensional image are matched to three-dimensional model features such that, from some view, the model features appear very much like the image features. The goal is to find the feature matches and rigid model transformations (or poses) that produce sufficiently good alignment. Because of variations in the image due to illumination, viewpoint, and neighboring objects, it is virtually impossible to judge individual feature matches independently. Their information must be combined in order to form a rich enough hypothesis to test. However, there are a huge number of possible ways to match sets of model features to sets of image features. All subsets of the image features must be formed and matched to every possible subset of the model features. Then, within each subset match, all permutations of matches must be considered. Many strategies have been explored to reduce the search and more efficiently find a set of matches that satisfies the constraints imposed by the model's shape. But, in addition to these constraints, there are important match-independent constraints derived from general information about the world, the imaging process, and the library of models as a whole. These constraints are less strict than match-dependent shape constraints, but they can be efficiently applied without the combinatorics of matching. In this thesis, I present two specific modules that demonstrate the utility of match-independent constraints. The first is a region-based grouping mechanism that drastically reduces the combinatorics of choosing subsets of features. Instead of all subsets, it finds groups of image features that are likely to come from a single object (without hypothesizing which object). Then, in order to address the combinatorics of matching within each subset, the second module, interpretive matching, makes explicit hypotheses about occlusion and instabilities in the image features. This module also begins to make matches with the model features, and applies only those match-dependent constraints that are independent of the model pose."
Description: | Includes bibliographical references
Description: | 125 p. : ill. ; 28 cm
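The abstract's claim that all subsets of the image features must be matched to every possible subset of the model features, with every permutation considered within each subset pair, amounts to counting partial one-to-one matchings between the two feature sets. The following Python sketch is not part of the thesis; it only illustrates, with made-up feature counts, how quickly that number grows:

```python
from math import comb, factorial

def num_candidate_matchings(m: int, i: int) -> int:
    """Count the matchings the abstract describes: choose a k-element
    subset of the i image features, a k-element subset of the m model
    features, and one of the k! ways to pair them up."""
    return sum(comb(i, k) * comb(m, k) * factorial(k)
               for k in range(1, min(m, i) + 1))

# Even small feature sets explode combinatorially:
print(num_candidate_matchings(5, 5))    # 1545 candidate matchings
print(num_candidate_matchings(10, 10))  # over 2 * 10**8 candidate matchings
```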
Internal format
MARC
LEADER | 00000nam a2200000 c 4500 | ||
---|---|---|---|
001 | BV035037365 | ||
003 | DE-604 | ||
005 | 00000000000000.0 | ||
007 | t | ||
008 | 080904s1991 a||| m||| 00||| eng d | ||
035 | |a (OCoLC)26271514 | ||
035 | |a (DE-599)BVBBV035037365 | ||
040 | |a DE-604 |b ger |e rakwb | ||
041 | 0 | |a eng | |
049 | |a DE-91G | ||
088 | |a AI TR 1307 | ||
100 | 1 | |a Clemens, David Taylor |e Verfasser |4 aut | |
245 | 1 | 0 | |a Region-based feature interpretation for recognizing 3D models in 2D images |c David T. Clemens |
246 | 1 | 3 | |a AI TR 1307 |
264 | 1 | |a [Cambridge, Mass.] |b MIT Artificial Intelligence Laboratory |c 1991 | |
300 | |a 125 S. |b Ill. |c 28 cm | ||
336 | |b txt |2 rdacontent | ||
337 | |b n |2 rdamedia | ||
338 | |b nc |2 rdacarrier | ||
500 | |a Includes bibliographical references | ||
502 | |a Zugl.: Cambridge, Mass., Massachusetts Institute of Technology, Diss., 1991 | ||
520 | 3 | |a Abstract: "In model-based vision, features found in a two-dimensional image are matched to three-dimensional model features such that, from some view, the model features appear very much like the image features. The goal is to find the feature matches and rigid model transformations (or poses) that produce sufficiently good alignment. Because of variations in the image due to illumination, viewpoint, and neighboring objects, it is virtually impossible to judge individual feature matches independently. Their information must be combined in order to form a rich enough hypothesis to test. However, there are a huge number of possible ways to match sets of model features to sets of image features | |
520 | 3 | |a All subsets of the image features must be formed, and matched to every possible subset of the model features. Then, within each subset match, all permutations of matches must be considered. Many strategies have been explored to reduce the search and more efficiently find a set of matches that satisfy the constraints imposed by the model's shape. But, in addition to these constraints, there are important match-independent constraints derived from general information about the world, the imaging process, and the library of models as a whole. These constraints are less strict than match-dependent shape constraints, but they can be efficiently applied without the combinatorics of matching | |
520 | 3 | |a In this thesis, I present two specific modules that demonstrate the utility of match-independent constraints. The first is a region-based grouping mechanism that drastically reduces the combinatorics of choosing subsets of features. Instead of all subsets, it finds groups of image features that are likely to come from a single object (without hypothesizing which object). Then in order to address the combinatorics of matching within each subset, the second module, interpretive matching, makes explicit hypotheses about occlusion and instabilities in the image features. This module also begins to make matches with the model features, and applies only those match-dependent constraints that are independent of the model pose | |
650 | 4 | |a Computer vision | |
650 | 4 | |a Computer vision | |
655 | 7 | |0 (DE-588)4113937-9 |a Hochschulschrift |2 gnd-content | |
999 | |a oai:aleph.bib-bvb.de:BVB01-016706247 |