Real-time super-resolved depth estimation for self-driving cars from multiple gated images
Saved in:

Author: | Gruber, Tobias |
---|---|
Format: | Thesis, Book |
Language: | English |
Published: | Ulm : Universität Ulm, Institut für Mess-, Regel- und Mikrotechnik, 2020 |
Series: | Schriftenreihe des Instituts für Mess-, Regel- und Mikrotechnik ; 36 |
Subjects: | Autonomes Fahrzeug ; Sensortechnik ; Kamera ; Tiefenbild ; Dreidimensionales maschinelles Sehen |
Online access: | Table of contents |
Note: | Bibliography pages [145]-171 |
Physical description: | xv, 171 pages ; illustrations, diagrams ; 21 cm |
ISBN: | 9783941543546 |
Internal format
MARC
LEADER | 00000nam a2200000 cb4500 | ||
---|---|---|---|
001 | BV047569837 | ||
003 | DE-604 | ||
005 | 20211112 | ||
007 | t | ||
008 | 211102s2020 gw a||| m||| 00||| eng d | ||
015 | |a 21,B09 |2 dnb | ||
015 | |a 21,H03 |2 dnb | ||
016 | 7 | |a 1226483569 |2 DE-101 | |
020 | |a 9783941543546 |c Broschur |9 978-3-941543-54-6 | ||
035 | |a (OCoLC)1228721754 | ||
035 | |a (DE-599)KXP1743761716 | ||
040 | |a DE-604 |b ger |e rda | ||
041 | 0 | |a eng | |
044 | |a gw |c XA-DE-BW | ||
049 | |a DE-573 | ||
084 | |a ST 330 |0 (DE-625)143663: |2 rvk | ||
084 | |a ZO 4650 |0 (DE-625)157741: |2 rvk | ||
084 | |a ZO 4660 |0 (DE-625)160578: |2 rvk | ||
100 | 1 | |a Gruber, Tobias |e Verfasser |0 (DE-588)1224418859 |4 aut | |
245 | 1 | 0 | |a Real-time super-resolved depth estimation for self-driving cars from multiple gated images |c Tobias Gruber |
264 | 1 | |a Ulm |b Universität Ulm, Institut für Mess-, Regel- und Mikrotechnik |c 2020 | |
300 | |a xv, 171 Seiten |b Illustrationen, Diagramme |c 21 cm | ||
336 | |b txt |2 rdacontent | ||
337 | |b n |2 rdamedia | ||
338 | |b nc |2 rdacarrier | ||
490 | 1 | |a Schriftenreihe des Instituts für Mess-, Regel- und Mikrotechnik |v 36 | |
500 | |a Literaturverzeichnis Seite [145]-171 | ||
502 | |b Dissertation |c Universität Ulm |d 2020 | ||
546 | |a Kurzfassung in deutscher und in englischer Sprache | ||
650 | 0 | |a Pattern perception | |
650 | 0 | |a Automated vehicles | |
650 | 0 | |a Neural networks (Computer science) | |
650 | 0 | |a Computer vision | |
650 | 0 | 7 | |a Autonomes Fahrzeug |0 (DE-588)7714938-5 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Sensortechnik |0 (DE-588)4121663-5 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Kamera |0 (DE-588)4366655-3 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Tiefenbild |0 (DE-588)4493195-5 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Dreidimensionales maschinelles Sehen |0 (DE-588)4137757-6 |2 gnd |9 rswk-swf |
653 | |a Self-driving cars | ||
653 | |a 3D Umgebungserfassung | ||
655 | 7 | |0 (DE-588)4113937-9 |a Hochschulschrift |2 gnd-content | |
689 | 0 | 0 | |a Autonomes Fahrzeug |0 (DE-588)7714938-5 |D s |
689 | 0 | 1 | |a Dreidimensionales maschinelles Sehen |0 (DE-588)4137757-6 |D s |
689 | 0 | 2 | |a Tiefenbild |0 (DE-588)4493195-5 |D s |
689 | 0 | 3 | |a Kamera |0 (DE-588)4366655-3 |D s |
689 | 0 | 4 | |a Sensortechnik |0 (DE-588)4121663-5 |D s |
689 | 0 | |5 DE-604 | |
710 | 2 | |a Universität Ulm |0 (DE-588)30607-1 |4 dgg | |
776 | 0 | 8 | |i Erscheint auch als |n Online-Ausgabe, E-Book |a Gruber, Tobias |t Real-time super-resolved depth estimation for self-driving cars from multiple gated images |d Ulm : Universität Ulm, Institut für Mess-, Regel- und Mikrotechnik, 2020 |h 1 Online-Ressource |z 978-3-941543-55-3
830 | 0 | |a Schriftenreihe des Instituts für Mess-, Regel- und Mikrotechnik |v 36 |w (DE-604)BV044356202 |9 36 | |
856 | 4 | 2 | |m B:DE-101 |q application/pdf |u https://d-nb.info/1226483569/04 |3 Inhaltsverzeichnis |
856 | 4 | 2 | |m DNB Datenaustausch |q application/pdf |u http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=032955484&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |3 Inhaltsverzeichnis |
999 | |a oai:aleph.bib-bvb.de:BVB01-032955484 |
CONTENTS

1 Introduction ................................................. 1
  1.1 Motivation ............................................... 1
  1.2 Aim of this thesis ....................................... 5
  1.3 Structure of the thesis .................................. 6
2 Fundamentals ................................................. 7
  2.1 Computer vision basics ................................... 7
    2.1.1 Coordinate systems ................................... 7
    2.1.2 Pinhole camera model ................................. 9
    2.1.3 Camera calibration ................................... 11
  2.2 Gated imaging ............................................ 12
    2.2.1 Image formation ...................................... 13
    2.2.2 3D scene reconstruction with gated images ............ 17
  2.3 Machine learning ......................................... 20
    2.3.1 Main concepts of machine learning .................... 20
    2.3.2 Feed-forward neural network .......................... 20
    2.3.3 Training a neural network ............................ 22
    2.3.4 Convolutional neural networks ........................ 23
    2.3.5 Fully convolutional networks ......................... 24
    2.3.6 Generative adversarial networks ...................... 24
    2.3.7 Bayesian neural networks ............................. 26
3 Related work ................................................. 29
  3.1 Passive 3D reconstruction ................................ 30
    3.1.1 Monocular depth estimation ........................... 30
    3.1.2 Stereo vision ........................................ 32
    3.1.3 Structure from motion ................................ 35
  3.2 Active 3D reconstruction ................................. 36
    3.2.1 Pulsed LiDAR ......................................... 36
    3.2.2 Time-of-flight cameras ............................... 37
    3.2.3 Frequency-modulated continuous wave LiDAR ............ 38
    3.2.4 3D range gated reconstruction ........................ 38
    3.2.5 Depth completion ..................................... 41
  3.3 Data sets ................................................ 42
    3.3.1 Real data sets ....................................... 42
    3.3.2 Synthetic data sets .................................. 43
  3.4 Uncertainty in machine learning .......................... 44
4 Collected data sets .......................................... 47
  4.1 Experimental setup ....................................... 47
    4.1.1 Sensor setup ......................................... 47
    4.1.2 Sensor synchronization and calibration ............... 50
    4.1.3 Design of the range-intensity profile ................ 54
  4.2 Real gated data set ...................................... 55
  4.3 Synthetic gated data set ................................. 57
  4.4 Gated simulation on KITTI ................................ 63
  4.5 Evaluation data set ...................................... 64
5 The novel system for pixel-based gated depth estimation ...... 69
  5.1 Inverse function problem ................................. 72
  5.2 Data collection .......................................... 73
  5.3 Neural network ........................................... 77
  5.4 Least-squares baseline ................................... 78
6 The innovative system to learn full-image gated depth
  from sparse ground truth .................................... 81
  6.1 Learning semantic context ................................ 81
    6.1.1 Network architecture ................................. 83
    6.1.2 Accuracy loss ........................................ 84
    6.1.3 Adversarial loss ..................................... 84
  6.2 Learning from sparse depth input ......................... 85
    6.2.1 Multi-scale loss ..................................... 86
    6.2.2 Smooth loss .......................................... 87
    6.2.3 Adversarial transfer ................................. 87
    6.2.4 Ground-truth completion .............................. 88
  6.3 Learning uncertainty ..................................... 88
7 System evaluation ............................................ 91
  7.1 Experimental setup ....................................... 92
    7.1.1 Implementation ....................................... 92
    7.1.2 Reference methods .................................... 92
    7.1.3 Metrics .............................................. 96
  7.2 Experimental investigation of the pixel-based system ..... 98
    7.2.1 Hyperparameter optimization .......................... 98
    7.2.2 SNR filter ........................................... 99
    7.2.3 Baseline comparison .................................. 100
  7.3 Experimental investigation of the image-based system ..... 101
    7.3.1 Ablation studies on GatedReal ........................ 102
    7.3.2 Evaluation on GatedReal .............................. 108
    7.3.3 Evaluation on GatedKITTI ............................. 113
  7.4 3D reconstruction under adverse weather conditions ....... 117
  7.5 Discussion ............................................... 124
8 Conclusion and future work ................................... 127
  8.1 Summary and conclusion ................................... 127
  8.2 Limitations and future work .............................. 130
A Supervised theses ............................................ 133
B Own publications ............................................. 135
Acronyms ....................................................... 137
List of symbols ................................................ 141
Bibliography ................................................... 145
No print copy is available.
Table of contents