Benchmarking, Measuring, and Optimizing: First BenchCouncil International Symposium, Bench 2018, Seattle, WA, USA, December 10-13, 2018, Revised Selected Papers
Saved in:

Main Author: | Zheng, Chen
---|---
Format: | Electronic eBook
Language: | English
Published: | Cham : Springer International Publishing AG, 2019
Series: | Lecture Notes in Computer Science Ser., v.11459
Subjects: | Benchmarking (Management); Informationstechnik; Big Data; Cloud Computing; Benchmarking; Konferenzschrift
Online Access: | DE-2070s
Description: | Description based on publisher supplied metadata and other sources
Physical Description: | 1 online resource (268 pages)
ISBN: | 9783030328139
Internal format
MARC
LEADER | 00000nam a2200000zcb4500 | ||
---|---|---|---|
001 | BV047692623 | ||
003 | DE-604 | ||
007 | cr|uuu---uuuuu | ||
008 | 220119s2019 xx o|||| 00||| eng d | ||
020 | |a 9783030328139 |9 978-3-030-32813-9 | ||
035 | |a (ZDB-30-PQE)EBC5946128 | ||
035 | |a (ZDB-30-PAD)EBC5946128 | ||
035 | |a (ZDB-89-EBL)EBL5946128 | ||
035 | |a (OCoLC)1125107641 | ||
035 | |a (DE-599)BVBBV047692623 | ||
040 | |a DE-604 |b ger |e rda | ||
041 | 0 | |a eng | |
049 | |a DE-2070s | ||
082 | 0 | |a 690.80683999999997 | |
100 | 1 | |a Zheng, Chen |e Verfasser |4 aut | |
245 | 1 | 0 | |a Benchmarking, Measuring, and Optimizing |b First BenchCouncil International Symposium, Bench 2018, Seattle, WA, USA, December 10-13, 2018, Revised Selected Papers |
264 | 1 | |a Cham |b Springer International Publishing AG |c 2019 | |
264 | 4 | |c ©2019 | |
300 | |a 1 online resource (268 pages) | ||
336 | |b txt |2 rdacontent | ||
337 | |b c |2 rdamedia | ||
338 | |b cr |2 rdacarrier | ||
490 | 0 | |a Lecture Notes in Computer Science Ser. |v v.11459 | |
500 | |a Description based on publisher supplied metadata and other sources | ||
505 | 8 | |a Intro -- BenchCouncil: Benchmarking and Promoting Innovative Techniques -- Organization -- Contents -- AI Benchmarking -- AIBench: Towards Scalable and Comprehensive Datacenter AI Benchmarking -- 1 Introduction -- 2 Related Work -- 3 Datacenter AI Benchmark Suite-AIBench -- 3.1 Datacenter AI Micro Benchmarks -- 3.2 Datacenter AI Component Benchmarks -- 3.3 Application Benchmarks -- 3.4 AI Competition -- 4 Conclusion -- References -- HPC AI500: A Benchmark Suite for HPC AI Systems -- 1 Introduction -- 2 Deep Learning in Scientific Computing -- 2.1 Extreme Weather Analysis -- 2.2 High Energy Physics -- 2.3 Cosmology -- 2.4 Summary -- 3 Benchmarking Methodology and Decisions -- 3.1 Methodology -- 3.2 The Selected Datasets -- 3.3 The Selected Workloads -- 3.4 Metrics -- 4 Reference Implementation -- 4.1 Component Benchmarks -- 4.2 Micro Benchmarks -- 5 Conclusion -- References -- Edge AIBench: Towards Comprehensive End-to-End Edge Computing Benchmarking -- 1 Introduction -- 2 Related Work -- 3 The Summary of Edge AIBench -- 3.1 ICU Patient Monitor -- 3.2 Surveillance Camera -- 3.3 Smart Home -- 3.4 Autonomous Vehicle -- 3.5 A Federated Learning Framework Testbed -- 4 Conclusion -- References -- AIoT Bench: Towards Comprehensive Benchmarking Mobile and Embedded Device Intelligence -- 1 Introduction -- 2 Benchmarking Requirements -- 3 AIoT Bench -- 4 Related Work -- 5 Conclusion -- References -- A Survey on Deep Learning Benchmarks: Do We Still Need New Ones? -- 1 Introduction -- 2 A Survey on Deep Learning Benchmarks -- 2.1 Stanford DAWNBench -- 2.2 Baidu DeepBench -- 2.3 Facebook AI Performance Evaluation Platform -- 2.4 ICT BigDataBench -- 2.5 Other Benchmarks -- 3 Discussion -- 3.1 Benchmark Comparison -- 3.2 Observations -- 4 Conclusion and Future Work -- References -- Cloud -- Benchmarking VM Startup Time in the Cloud -- 1 Introduction | |
505 | 8 | |a 2 Related Works -- 3 Methodology -- 3.1 Environment Setup -- 3.2 Algorithm -- 4 Result -- 4.1 By Instance Type -- 4.2 By Time of the Day -- 4.3 By Instance Location -- 4.4 By Cluster -- 5 Conclusions and Future Work -- References -- An Open Source Cloud-Based NoSQL and NewSQL Database Benchmarking Platform for IoT Data -- Abstract -- 1 Introduction -- 2 Background -- 2.1 NewSQL -- 2.2 NoSQL -- 2.3 MongoDB -- 2.4 VoltDB -- 2.5 Apache Kafka -- 2.6 Cloud Computing -- 3 Benchmarking Framework -- 3.1 Framework Components -- 3.2 Architecture -- 3.3 Data Generation and Consumption Algorithms -- 4 Experiments -- 4.1 System Configuration, Sensor Data Structure, and Formats -- 4.2 Experiment I: Data Injection with Different Volume and Velocity -- 4.3 Experiment II: Transactional Data Processing on High Volume of Data -- 4.4 Experiment III: Analytical Data Processing on High Volume of Data -- 4.5 Findings from Experiments -- 5 Related Work -- 6 Conclusion -- Acknowledgment -- References -- Scalability Evaluation of Big Data Processing Services in Clouds -- 1 Introduction -- 2 Related Work -- 2.1 Big Data Benchmarks -- 2.2 Scalability Evaluation of Big Data Processing Systems -- 3 Evaluation Model -- 4 Experiments -- 4.1 Experiment Environment -- 4.2 Scale-Out Analysis -- 4.3 Scale-Up Experiment -- 4.4 Experimental Results -- 5 Conclusion -- References -- PAIE: A Personal Activity Intelligence Estimator in the Cloud -- 1 Introduction -- 2 Related Work -- 3 Overview of PAIE -- 4 Statistic Issues -- 4.1 PAI Computing Mechanism -- 4.2 Statistical Modeling -- 4.3 PAI Estimating and Error Bounding -- 5 Implementing over Storm -- 6 Performance Evaluation -- 6.1 Experiment Methodology -- 6.2 Performance Analysis -- 6.3 Scalability Evaluation -- 7 Conclusions -- References -- DCMIX: Generating Mixed Workloads for the Cloud Data Center -- 1 Introduction -- 2 Related Work | |
505 | 8 | |a 3 DCMIX -- 3.1 Workloads -- 3.2 Mixed Workload Generator -- 4 System Entropy -- 5 Experiment and Experimental Analysis -- 5.1 Experimental Configurations and Methodology -- 5.2 Experiment Results and Observations -- 5.3 Summary -- 6 Conclusion -- References -- Machine-Learning Based Spark and Hadoop Workload Classification Using Container Performance Patterns -- 1 Introduction -- 1.1 Background -- 1.2 Problem -- 1.3 Limitations of Previous Approaches -- 1.4 Our Contribution -- 1.5 Resource Managers and Containers -- 2 Evaluation Methodology -- 2.1 Container Performance Metrics -- 2.2 Workloads and Workload Transitions -- 2.3 Parameter Settings -- 2.4 Hardware and Software -- 3 Results -- 3.1 Steady State Workload Characteristics -- 3.2 Dynamic Workload Characteristics - Workload Transitions -- 4 Identifying and Classifying Workloads -- 5 Detecting Workload Transitions -- 6 Relative Value and Importance of Container Performance Metrics -- 7 Conclusion -- References -- Testing Raft-Replicated Database Systems -- 1 Introduction -- 2 Background -- 2.1 Replicated State Machines -- 2.2 Raft Overview -- 3 System Model -- 4 Evaluation Metrics -- 4.1 Correctness -- 4.2 Performance -- 4.3 Scalability -- 5 Test Dimensions -- 5.1 Fault Type -- 5.2 Data Operation Type -- 5.3 System Configuration -- 6 Experiments -- 6.1 Experimental Setups -- 6.2 Recovery Time -- 6.3 Throughput and Latency -- 6.4 Stability -- 7 Related Work -- 8 Conclusion -- References -- Big Data -- Benchmarking for Transaction Processing Database Systems in Big Data Era -- 1 Introuduction -- 2 Requirements and Challenges -- 2.1 Data Generation -- 2.2 Workload Generation -- 2.3 Measurement Definition -- 2.4 Others -- 3 PeakBench: Benchmarking Transaction Processing Database Systems on Intensive Workloads -- 3.1 Business Description -- 3.2 Implementation of Benchmark Tool -- 3.3 Workloads | |
505 | 8 | |a 4 Test of PeakBench -- 5 Related Work -- 6 Conclusion -- References -- UMDISW: A Universal Multi-Domain Intelligent Scientific Workflow Framework for the Whole Life Cycle of Scientific Data -- 1 Introduction -- 2 The Status of UMDISW in System Architecture -- 3 The Model of UMDISW -- 3.1 Workflow and Task -- 3.2 Data Flow and Information Flow -- 3.3 Data Node and Algorithm Node -- 3.4 Example -- 4 The Structure and Execution of UMDISW -- 4.1 Running Service Layer -- 4.2 Workflow Execution Layer -- 4.3 Data Resource Layer -- 5 The Application Scenario of UMDISW -- 5.1 Fully Automated Workflow -- 5.2 Semi-custom Workflow -- 5.3 Fully Custom Workflow -- 6 The Implementation of UMDISW -- 7 Conclusion -- References -- MiDBench: Multimodel Industrial Big Data Benchmark -- 1 Introduction -- 2 Big Data Benchmarking Requirements -- 3 Related Work -- 4 Our Benchmarking Methodology -- 4.1 BoM Data Scenario Analysis -- 4.2 Analysis of Time Series Data Scenario -- 4.3 Unstructured Data Scenario Analysis -- 5 Synthetic Data Generation -- 6 Workload Characterization Experiments -- 6.1 Performance Tests on BoM Database Systems -- 6.2 Performance Tests on Time Series Database Systems -- 6.3 Performance Tests on Unstructured Database Systems -- 7 Conclusion -- References -- Modelling and Prediction -- Power Characterization of Memory Intensive Applications: Analysis and Implications -- 1 Motivation -- 2 Evolution of Server Energy Efficiency -- 2.1 Metrics of Energy Efficiency and Energy Proportionality -- 2.2 Experiment Setup -- 3 Experiment Results and Observations -- 3.1 Results of SPECpower Workload -- 3.2 Results of STREAM Workload -- 3.3 Insights on Energy Efficiency of Memory Intensive Applications -- 4 Related Work -- 5 Conclusions -- References -- Multi-USVs Coordinated Detection in Marine Environment with Deep Reinforcement Learning -- 1 Introduction | |
505 | 8 | |a 2 Background -- 2.1 USV Overview -- 2.2 Reinforcement Learning -- 3 Approach -- 3.1 Single-USV RL -- 3.2 Multi-USVs Coordinated Detection -- 4 Results and Discussion -- 5 Conclusion -- References -- EC-Bench: Benchmarking Onload and Offload Erasure Coders on Modern Hardware Architectures -- 1 Introduction -- 2 Background -- 2.1 Erasure Coding -- 2.2 Onload and Offload Erasure Coders -- 3 EC-Bench Design -- 3.1 Design -- 3.2 Parameter Space -- 3.3 Metrics -- 4 Evaluation -- 4.1 Open Source Libraries -- 4.2 Experimental Setup -- 4.3 Experimental Results -- 5 Related Work -- 6 Conclusion -- References -- Algorithm and Implementations -- Benchmarking SpMV Methods on Many-Core Platforms -- 1 Introduction -- 2 Benchmarking Methodology -- 2.1 Selected SpMV Methods -- 2.2 Selected Features -- 2.3 Hardware Configuration -- 2.4 Data Sets -- 2.5 Experimental Methods -- 3 Experimental Results -- 3.1 SpMV Performance -- 3.2 Best-Method Analysis -- 3.3 Correlation Analysis of Performance and Sparse Pattern -- 4 Conclusion -- References -- Benchmarking Parallel K-Means Cloud Type Clustering from Satellite Data -- 1 Introduction -- 2 Background -- 2.1 Cloud Joint Histograms -- 2.2 K-means Clustering -- 3 Implementation Details -- 3.1 OpenMP Based Implementation -- 3.2 OpenMP and MPI Based Implementation -- 3.3 Spark Based Implementation -- 4 Results -- 4.1 Code Validity -- 4.2 Performance -- 4.3 Cross Comparison -- 5 Related Work -- 6 Conclusions -- References -- Correction to: MiDBench: Multimodel Industrial Big Data Benchmark -- Correction to: Chapter "MiDBench: Multimodel Industrial Big Data Benchmark" in: C. Zheng and J. Zhan (Eds.): Benchmarking, Measuring, and Optimizing, LNCS 11459, https://doi.org/10.1007/978-3-030-32813-9_15 -- Author Index | |
650 | 4 | |a Benchmarking (Management) | |
650 | 0 | 7 | |a Informationstechnik |0 (DE-588)4026926-7 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Big Data |0 (DE-588)4802620-7 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Cloud Computing |0 (DE-588)7623494-0 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Benchmarking |0 (DE-588)4329573-3 |2 gnd |9 rswk-swf |
655 | 7 | |0 (DE-588)1071861417 |a Konferenzschrift |y 2018 |z Seattle, Wash. |2 gnd-content | |
689 | 0 | 0 | |a Informationstechnik |0 (DE-588)4026926-7 |D s |
689 | 0 | 1 | |a Benchmarking |0 (DE-588)4329573-3 |D s |
689 | 0 | 2 | |a Cloud Computing |0 (DE-588)7623494-0 |D s |
689 | 0 | 3 | |a Big Data |0 (DE-588)4802620-7 |D s |
689 | 0 | |5 DE-604 | |
700 | 1 | |a Zhan, Jianfeng |e Sonstige |4 oth | |
776 | 0 | 8 | |i Erscheint auch als |n Druck-Ausgabe |a Zheng, Chen |t Benchmarking, Measuring, and Optimizing |d Cham : Springer International Publishing AG,c2019 |z 9783030328122 |
912 | |a ZDB-30-PQE | ||
943 | 1 | |a oai:aleph.bib-bvb.de:BVB01-033076617 | |
966 | e | |u https://ebookcentral.proquest.com/lib/hwr/detail.action?docID=5946128 |l DE-2070s |p ZDB-30-PQE |q HWR_PDA_PQE |x Aggregator |3 Volltext |
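The MARC fields above can also be read programmatically. The following is a minimal sketch, not part of the catalogue record itself: it assumes the record has been exported as MARCXML (MARC21 slim schema) to a hypothetical local file named `BV047692623.xml`, extracts the title (245), ISBN (020), and subjects (650), and splits the 505 contents notes on their " -- " separators.

```python
import xml.etree.ElementTree as ET

# Standard namespace of the MARC21 slim schema used by MARCXML exports.
NS = {"marc": "http://www.loc.gov/MARC21/slim"}

def subfields(record, tag, code):
    """Collect every $code subfield value from datafields with the given tag."""
    return [
        sf.text
        for df in record.findall(f"marc:datafield[@tag='{tag}']", NS)
        for sf in df.findall(f"marc:subfield[@code='{code}']", NS)
        if sf.text
    ]

root = ET.parse("BV047692623.xml").getroot()  # hypothetical local export
# The export may be wrapped in <collection><record>...</record></collection>.
record = root.find("marc:record", NS) if root.tag.endswith("collection") else root

title = " : ".join(subfields(record, "245", "a") + subfields(record, "245", "b"))
isbns = subfields(record, "020", "a")
subjects = subfields(record, "650", "a")
# 505 formatted-contents notes string their entries together with " -- ".
toc_entries = [e for note in subfields(record, "505", "a") for e in note.split(" -- ")]

print(title)
print("ISBN:", ", ".join(isbns))
print("Subjects:", "; ".join(subjects))
print("Contents entries:", len(toc_entries))
```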
Record in the search index
_version_ | 1820875810307833856 |
---|---|
author | Zheng, Chen |
author_facet | Zheng, Chen |
author_role | aut |
author_sort | Zheng, Chen |
author_variant | c z cz |
building | Verbundindex |
bvnumber | BV047692623 |
classification_rvk | SS 4800 |
collection | ZDB-30-PQE |
ctrlnum | (ZDB-30-PQE)EBC5946128 (ZDB-30-PAD)EBC5946128 (ZDB-89-EBL)EBL5946128 (OCoLC)1125107641 (DE-599)BVBBV047692623 |
dewey-full | 690.80683999999997 |
dewey-hundreds | 600 - Technology (Applied sciences) |
dewey-ones | 690 - Construction of buildings |
dewey-raw | 690.80683999999997 |
dewey-search | 690.80683999999997 |
dewey-sort | 3690.80683999999997 |
dewey-tens | 690 - Construction of buildings |
discipline | Bauingenieurwesen |
discipline_str_mv | Informatik Bauingenieurwesen |
format | Electronic eBook |
genre | (DE-588)1071861417 Konferenzschrift 2018 Seattle, Wash. gnd-content |
genre_facet | Konferenzschrift 2018 Seattle, Wash. |
id | DE-604.BV047692623 |
illustrated | Not Illustrated |
index_date | 2024-07-03T18:57:25Z |
indexdate | 2025-01-10T15:21:33Z |
institution | BVB |
isbn | 9783030328139 |
language | English |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-033076617 |
oclc_num | 1125107641 |
owner | DE-2070s |
owner_facet | DE-2070s |
physical | 1 online resource (268 pages) |
psigel | ZDB-30-PQE ZDB-30-PQE HWR_PDA_PQE |
publishDate | 2019 |
publishDateSearch | 2019 |
publishDateSort | 2019 |
publisher | Springer International Publishing AG |
record_format | marc |
series2 | Lecture Notes in Computer Science Ser. |
subject_GND | (DE-588)4026926-7 (DE-588)4802620-7 (DE-588)7623494-0 (DE-588)4329573-3 (DE-588)1071861417 |
title | Benchmarking, Measuring, and Optimizing First BenchCouncil International Symposium, Bench 2018, Seattle, WA, USA, December 10-13, 2018, Revised Selected Papers |
title_auth | Benchmarking, Measuring, and Optimizing First BenchCouncil International Symposium, Bench 2018, Seattle, WA, USA, December 10-13, 2018, Revised Selected Papers |
title_exact_search | Benchmarking, Measuring, and Optimizing First BenchCouncil International Symposium, Bench 2018, Seattle, WA, USA, December 10-13, 2018, Revised Selected Papers |
title_exact_search_txtP | Benchmarking, Measuring, and Optimizing First BenchCouncil International Symposium, Bench 2018, Seattle, WA, USA, December 10-13, 2018, Revised Selected Papers |
title_full | Benchmarking, Measuring, and Optimizing First BenchCouncil International Symposium, Bench 2018, Seattle, WA, USA, December 10-13, 2018, Revised Selected Papers |
title_fullStr | Benchmarking, Measuring, and Optimizing First BenchCouncil International Symposium, Bench 2018, Seattle, WA, USA, December 10-13, 2018, Revised Selected Papers |
title_full_unstemmed | Benchmarking, Measuring, and Optimizing First BenchCouncil International Symposium, Bench 2018, Seattle, WA, USA, December 10-13, 2018, Revised Selected Papers |
title_short | Benchmarking, Measuring, and Optimizing |
title_sort | benchmarking measuring and optimizing first benchcouncil international symposium bench 2018 seattle wa usa december 10 13 2018 revised selected papers |
title_sub | First BenchCouncil International Symposium, Bench 2018, Seattle, WA, USA, December 10-13, 2018, Revised Selected Papers |
topic | Benchmarking (Management) Informationstechnik (DE-588)4026926-7 gnd Big Data (DE-588)4802620-7 gnd Cloud Computing (DE-588)7623494-0 gnd Benchmarking (DE-588)4329573-3 gnd |
topic_facet | Benchmarking (Management) Informationstechnik Big Data Cloud Computing Benchmarking Konferenzschrift 2018 Seattle, Wash. |
work_keys_str_mv | AT zhengchen benchmarkingmeasuringandoptimizingfirstbenchcouncilinternationalsymposiumbench2018seattlewausadecember10132018revisedselectedpapers AT zhanjianfeng benchmarkingmeasuringandoptimizingfirstbenchcouncilinternationalsymposiumbench2018seattlewausadecember10132018revisedselectedpapers |
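The `work_keys_str_mv` values above look like simple author/title deduplication keys. Below is a minimal sketch of a plausible reconstruction, assuming the key is the lowercased author and title with every character other than letters and digits removed, prefixed by an `AT` marker (assumed here to denote an author/title key); it reproduces the two strings shown above from the record's author fields and full title.

```python
import re

def squash(text: str) -> str:
    """Lowercase and drop every character that is not a-z or 0-9."""
    return re.sub(r"[^a-z0-9]", "", text.lower())

title = ("Benchmarking, Measuring, and Optimizing First BenchCouncil International "
         "Symposium, Bench 2018, Seattle, WA, USA, December 10-13, 2018, "
         "Revised Selected Papers")

for author in ("Zheng, Chen", "Zhan, Jianfeng"):
    # "AT" is assumed to mark an author/title work key in the index.
    print(f"AT {squash(author)} {squash(title)}")
```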