Machine Learning and Knowledge Extraction: 7th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2023, Benevento, Italy, August 29 - September 1, 2023, Proceedings
Main Author: | Holzinger, Andreas |
---|---|
Format: | Electronic eBook |
Language: | English |
Published: | Cham : Springer, 2023 |
Edition: | 1st ed. |
Series: | Lecture Notes in Computer Science Series ; v.14065 |
Subjects: | Machine learning ; Knowledge management |
Online Access: | DE-2070s |
Note: | Description based on publisher-supplied metadata and other sources |
Physical Description: | 1 online resource (335 pages) |
ISBN: | 9783031408373 |
Internal format
MARC
LEADER | 00000nam a2200000zcb4500 | ||
---|---|---|---|
001 | BV050100559 | ||
003 | DE-604 | ||
007 | cr|uuu---uuuuu | ||
008 | 241218s2023 xx o|||| 00||| eng d | ||
020 | |a 9783031408373 |9 978-3-031-40837-3 | ||
035 | |a (ZDB-30-PQE)EBC30717195 | ||
035 | |a (ZDB-30-PAD)EBC30717195 | ||
035 | |a (ZDB-89-EBL)EBL30717195 | ||
035 | |a (OCoLC)1395507863 | ||
035 | |a (DE-599)BVBBV050100559 | ||
040 | |a DE-604 |b ger |e rda | ||
041 | 0 | |a eng | |
049 | |a DE-2070s | ||
082 | 0 | |a 016.34951249 | |
084 | |a SS 4800 |0 (DE-625)143528: |2 rvk | ||
100 | 1 | |a Holzinger, Andreas |e Verfasser |4 aut | |
245 | 1 | 0 | |a Machine Learning and Knowledge Extraction |b 7th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2023, Benevento, Italy, August 29 - September 1, 2023, Proceedings |
250 | |a 1st ed | ||
264 | 1 | |a Cham |b Springer |c 2023 | |
264 | 4 | |c ©2023 | |
300 | |a 1 Online-Ressource (335 Seiten) | ||
336 | |b txt |2 rdacontent | ||
337 | |b c |2 rdamedia | ||
338 | |b cr |2 rdacarrier | ||
490 | 0 | |a Lecture Notes in Computer Science Series |v v.14065 | |
500 | |a Description based on publisher supplied metadata and other sources | ||
505 | 8 | |a Intro -- Preface -- Organization -- Contents -- About the Editors -- Controllable AI - An Alternative to Trustworthiness in Complex AI Systems? -- 1 Introduction and Motivation -- 2 Background -- 2.1 The Explainability Problem -- 2.2 Trustworthy AI -- 2.3 The AI Act -- 3 Principles of Controllable AI -- 4 Techniques for Controllable AI -- 4.1 Detecting Control Loss -- 4.2 Managing Control Loss -- 4.3 Support Measures -- 5 Discussion -- 6 Conclusion and Outlook for Future Research -- References -- Efficient Approximation of Asymmetric Shapley Values Using Functional Decomposition -- 1 Introduction -- 1.1 Shapley Values -- 1.2 Asymmetric Shapley Values -- 1.3 PDD-SHAP -- 2 A-PDD-SHAP -- 3 Experiments -- 3.1 Causal Explanations of Unfair Discrimination -- 3.2 Evaluation on Real-World Datasets -- 4 Summary and Outlook -- 5 Conclusion -- A Proof of Theorem 1 -- References -- Domain-Specific Evaluation of Visual Explanations for Application-Grounded Facial Expression Recognition -- 1 Introduction -- 2 Related Work -- 3 Materials and Methods -- 3.1 Evaluation Framework Overview -- 3.2 Step 1: Data Set and Model Selection -- 3.3 Step 2: Model Performance Analysis -- 3.4 Step 3: Visual Classification Explanations -- 3.5 Step 4: Domain-Specific Evaluation Based on Landmarks -- 4 Results and Discussion -- 5 Conclusion -- References -- Human-in-the-Loop Integration with Domain-Knowledge Graphs for Explainable Federated Deep Learning -- 1 Introduction and Motivation -- 2 Background and Related Work -- 2.1 Explainable AI on Graph Neural Networks -- 2.2 Federated Learning -- 2.3 Knowledge Graphs -- 2.4 Human-in-the-Loop -- 3 Methods, Solutions and Implementations -- 3.1 Disease Subnetwork Detection -- 3.2 Explainability -- 3.3 Knowledge Graph -- 3.4 Federated Ensemble Learning with GNNs -- 3.5 interaCtive expLainable plAtform for gRaph neUral networkS (CLARUS) | |
505 | 8 | |a 4 Lessons Learned -- 5 Conclusion and Future Outlook -- References -- The Tower of Babel in Explainable Artificial Intelligence (XAI) -- 1 Introduction and Motivation -- 2 Ethics Guidelines and XAI -- 3 Law and XAI -- 3.1 GDPR -- 3.2 Digital Services Act (DSA) -- 3.3 The (Proposed) Artificial Intelligence Act (AIA) -- 4 Standardization and XAI -- 5 The Link Between Law and Standardization -- 6 A Proposed Solution -- 7 Conclusion -- References -- Hyper-Stacked: Scalable and Distributed Approach to AutoML for Big Data -- 1 Introduction -- 2 Background and Related Work -- 2.1 Problem Definition -- 2.2 CASH Methods -- 2.3 Ensemble Learning -- 2.4 Meta-learning -- 2.5 Spark -- 3 Hyper-Stacked: A Scalable and Distributed Approach to AutoML for Big Data -- 3.1 Motivation -- 3.2 Hyper-Stacked's Design and Workflow -- 4 Experimental Design -- 4.1 Binary Supervised Learning Problems -- 4.2 Experimental Setups -- 4.3 Experiment Speedup, Sizeup, Scaleup -- 5 Analysis of Results -- 5.1 Speedup -- 5.2 Sizeup -- 5.3 Scaleup -- 6 Conclusions -- References -- Transformers are Short-Text Classifiers -- 1 Introduction -- 2 Related Work -- 2.1 Sequence-Based Models -- 2.2 Graph-Based Models -- 2.3 Short Text Models -- 2.4 Summary -- 3 Selected Models for Our Comparison -- 3.1 Models for Short Text Classification -- 3.2 Top-Performing Models for Text Classification -- 4 Experimental Apparatus -- 4.1 Datasets -- 4.2 Preprocessing -- 4.3 Procedure -- 4.4 Hyperparameter Optimization -- 4.5 Metrics -- 5 Results -- 6 Discussion -- 6.1 Key Results -- 6.2 Threats to Validity -- 6.3 Parameter Count of Models -- 6.4 Generalization -- 7 Conclusion and Future Work -- References -- Reinforcement Learning with Temporal-Logic-Based Causal Diagrams -- 1 Introduction -- 2 Motivating Example -- 3 Related Work -- 4 Preliminaries -- 5 Temporal-Logic-Based Causal Diagrams | |
505 | 8 | |a 6 Reinforcement Learning with Causal Diagrams -- 6.1 Q-Learning with Early Stopping -- 7 Case Studies -- 7.1 Case Study I: Small Office World Domain -- 7.2 Case Study II: Large Office World Domain -- 7.3 Case Study III: Crossroad Domain -- 8 Conclusions and Discussions -- References -- Using Machine Learning to Generate a Dictionary for Environmental Issues -- 1 Introduction -- 1.1 Findings -- 1.2 This Paper -- 2 Background -- 2.1 Word2Vec - CBOW and Skip-Gram -- 2.2 ChatGPT -- 2.3 Business Text - Form 10K and Earning Call Conferences -- 2.4 Role of Human in the Loop -- 3 Bag of Words -- 3.1 Concepts as Ontologies Represented as Bag of Words -- 3.2 Example System - LIWC -- 3.3 Concept of Interest - Carbon Footprint -- 4 Word2Vec - Single Words -- 4.1 Data for Word2Vec -- 4.2 Approach -- 4.3 Findings -- 4.4 Implications -- 4.5 Human in the Loop -- 5 Comparison of Word Lists Between Word2Vec and ChatGPT -- 5.1 Approach -- 5.2 Using ChatGPT to Facilitate List Analysis -- 5.3 Findings -- 5.4 Implications -- 6 Building a Dictionary -- 6.1 Carbon Footprint -- 6.2 Other Environmental Dictionaries -- 7 Positive, Negative or Action Dictionaries -- 8 Summary, Contributions and Extensions -- 8.1 Contributions -- 8.2 Extensions -- References -- Let Me Think! Investigating the Effect of Explanations Feeding Doubts About the AI Advice -- 1 Motivations and Background -- 2 Methods -- 2.1 Statistical Analysis -- 3 Results -- 3.1 RQ1: Impact on Decision Performance -- 3.2 RQ2: Impact on Decision Confidence -- 3.3 RQ3: Impact on Perceived Utility -- 4 Discussion -- 5 Conclusions -- References -- Enhancing Trust in Machine Learning Systems by Formal Methods -- 1 Introduction -- 2 State of the Art -- 3 What is an "Explanation"? -- 3.1 A Brief Survey of the Term Explanation in the Philosophy of Science -- 3.2 Defining Explanation for Systems Based on Machine Learning | |
505 | 8 | |a 4 Generating an Explanation -- 5 Applying the Method to a Meteorological Example -- 5.1 Description of the Meteorological Problem -- 5.2 Machine Learning Approach -- 5.3 Constructing the Explanation -- 6 Conclusions -- References -- Sustainability Effects of Robust and Resilient Artificial Intelligence -- 1 Introduction -- 2 Research Method -- 3 Robust and Resilient Artificial Intelligence -- 4 Sustainability Effects -- 4.1 Direct Sustainability Effects -- 4.2 Sustainability Effects in Selected Application Areas -- 5 Conclusion -- References -- The Split Matters: Flat Minima Methods for Improving the Performance of GNNs -- 1 Introduction -- 2 Related Work -- 2.1 Searching for Flat Minima -- 2.2 Graph Neural Networks -- 3 Flat Minima Methods -- 4 Experimental Apparatus -- 4.1 Datasets -- 4.2 Procedure -- 4.3 Hyperparameters -- 4.4 Metrics -- 5 Results -- 6 Discussion -- 6.1 Key Insights -- 6.2 Combining up to Three Flat Minima Methods -- 6.3 Influence of Dataset Splits -- 6.4 Transductive vs. Inductive Training -- 6.5 Detailed Discussion of Graph-MLP -- 6.6 Assumptions and Limitations -- 7 Conclusion -- A Hyperparameters -- A.1 Base Models -- A.2 Flat Minima Methods -- B Standard Deviations of Results -- References -- Probabilistic Framework Based on Deep Learning for Differentiating Ultrasound Movie View Planes -- 1 Introduction -- 2 Materials and Methods -- 2.1 The Deep Learning Algorithm -- 2.2 Fetal Abdomen Dataset -- 3 Results -- 4 Weighted Voted System -- 5 Discussion -- 6 Conclusions -- References -- Standing Still Is Not an Option: Alternative Baselines for Attainable Utility Preservation -- 1 Introduction -- 2 Related Work -- 3 Attainable Utility Preservation -- 4 Methods -- 5 Experimental Design -- 5.1 Environments -- 5.2 General Settings -- 6 Results -- 6.1 Comparison to AUP -- 6.2 Dropping the No-Op Action -- 7 Discussion -- 8 Conclusion | |
505 | 8 | |a 8.1 Future Work -- References -- Memorization of Named Entities in Fine-Tuned BERT Models -- 1 Introduction -- 2 Related Work -- 2.1 Language Models and Text Generation -- 2.2 Privacy Attacks in Machine Learning -- 2.3 Privacy Preserving Deep Learning -- 3 Extracting Named Entities from BERT -- 3.1 Fine-Tuning -- 3.2 Text Generation -- 3.3 Evaluating Named Entity Memorization -- 4 Experimental Apparatus -- 4.1 Datasets -- 4.2 Procedure and Implementation -- 4.3 Hyperparameter Optimization -- 4.4 Measures -- 5 Results -- 5.1 Classification -- 5.2 Named Entity Memorization -- 6 Discussion -- 6.1 Key Insights -- 6.2 Generalization -- 6.3 Threats to Validity -- 7 Conclusion -- References -- Event and Entity Extraction from Generated Video Captions -- 1 Introduction -- 2 Related Work -- 2.1 Dense Video Captioning -- 2.2 Text Information Extraction and Classification -- 3 Semantic Metadata Extraction from Videos -- 3.1 Dense Video Captioning (DVC) -- 3.2 Event Processing -- 3.3 Language Processing -- 3.4 Entity Extraction -- 3.5 Property Extraction -- 3.6 Relation Extraction -- 3.7 Text Classification -- 4 Experimental Apparatus -- 4.1 Datasets -- 4.2 Procedure -- 4.3 Hyperparameter Optimization -- 4.4 Measures and Metrics -- 5 Results -- 5.1 Dense Video Captioning -- 5.2 Entity Extraction -- 5.3 Property Extraction -- 5.4 Relation Extraction -- 5.5 Text Classification -- 6 Discussion -- 6.1 Key Results -- 6.2 Threats to Validity and Future Work -- 7 Conclusion -- References -- Fine-Tuning Language Models for Scientific Writing Support -- 1 Introduction -- 2 Related Work -- 2.1 Pre-trained Encoder Language Models -- 2.2 Pre-trained Decoder Language Models -- 2.3 Text Classification -- 2.4 Sentence Transformation and Paraphrasing -- 2.5 Tools to Improve Writing Quality -- 3 Experimental Apparatus -- 3.1 Datasets -- 3.2 Preprocessing -- 3.3 Procedure | |
505 | 8 | |a 3.4 Hyperparameter Optimization | |
650 | 4 | |a Knowledge management-Congresses | |
650 | 4 | |a Machine learning-Congresses | |
650 | 0 | 7 | |a Wissensextraktion |0 (DE-588)4546354-2 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Maschinelles Lernen |0 (DE-588)4193754-5 |2 gnd |9 rswk-swf |
655 | 7 | |0 (DE-588)1071861417 |a Konferenzschrift |y 2023 |z Benevent |2 gnd-content | |
689 | 0 | 0 | |a Maschinelles Lernen |0 (DE-588)4193754-5 |D s |
689 | 0 | 1 | |a Wissensextraktion |0 (DE-588)4546354-2 |D s |
689 | 0 | |5 DE-604 | |
700 | 1 | |a Kieseberg, Peter |e Sonstige |4 oth | |
700 | 1 | |a Cabitza, Federico |e Sonstige |4 oth | |
700 | 1 | |a Campagner, Andrea |e Sonstige |4 oth | |
700 | 1 | |a Tjoa, A. Min |e Sonstige |4 oth | |
700 | 1 | |a Weippl, Edgar |e Sonstige |4 oth | |
776 | 0 | 8 | |i Erscheint auch als |n Druck-Ausgabe |a Holzinger, Andreas |t Machine Learning and Knowledge Extraction |d Cham : Springer,c2023 |z 9783031408366 |
912 | |a ZDB-30-PQE | ||
943 | 1 | |a oai:aleph.bib-bvb.de:BVB01-035437721 | |
966 | e | |u https://ebookcentral.proquest.com/lib/hwr/detail.action?docID=30717195 |l DE-2070s |p ZDB-30-PQE |q HWR_PDA_PQE |x Aggregator |3 Volltext |
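The MARC fields above can also be read programmatically. Below is a minimal sketch using Python's standard-library XML parser against an illustrative MARCXML fragment rebuilt from the 020 and 245 fields of this record; the fragment and the `subfield` helper are assumptions for illustration, and the catalogue's actual export may differ in detail.

```python
import xml.etree.ElementTree as ET

# Illustrative MARCXML fragment built from the 020 and 245 fields above,
# following the Library of Congress MARCXML "slim" schema.
MARCXML = """<record xmlns="http://www.loc.gov/MARC21/slim">
  <datafield tag="020" ind1=" " ind2=" ">
    <subfield code="a">9783031408373</subfield>
  </datafield>
  <datafield tag="245" ind1="1" ind2="0">
    <subfield code="a">Machine Learning and Knowledge Extraction</subfield>
    <subfield code="b">7th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 \
International Cross-Domain Conference, CD-MAKE 2023, Benevento, Italy, \
August 29 - September 1, 2023, Proceedings</subfield>
  </datafield>
</record>"""

NS = {"m": "http://www.loc.gov/MARC21/slim"}

def subfield(root, tag, code):
    """Return the first matching subfield's text, or None if absent."""
    path = f"m:datafield[@tag='{tag}']/m:subfield[@code='{code}']"
    el = root.find(path, NS)
    return el.text if el is not None else None

root = ET.fromstring(MARCXML)
isbn = subfield(root, "020", "a")    # ISBN from field 020 $a
title = subfield(root, "245", "a")   # title proper from field 245 $a
print(isbn, "-", title)
```

Field 020 $a carries the ISBN and 245 $a/$b the title and subtitle, so the same lookup pattern extends to any tag/subfield pair shown in the record.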
code="0">(DE-588)4193754-5</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2="1"><subfield code="a">Wissensextraktion</subfield><subfield code="0">(DE-588)4546354-2</subfield><subfield code="D">s</subfield></datafield><datafield tag="689" ind1="0" ind2=" "><subfield code="5">DE-604</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Kieseberg, Peter</subfield><subfield code="e">Sonstige</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Cabitza, Federico</subfield><subfield code="e">Sonstige</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Campagner, Andrea</subfield><subfield code="e">Sonstige</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Tjoa, A. Min</subfield><subfield code="e">Sonstige</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Weippl, Edgar</subfield><subfield code="e">Sonstige</subfield><subfield code="4">oth</subfield></datafield><datafield tag="776" ind1="0" ind2="8"><subfield code="i">Erscheint auch als</subfield><subfield code="n">Druck-Ausgabe</subfield><subfield code="a">Holzinger, Andreas</subfield><subfield code="t">Machine Learning and Knowledge Extraction</subfield><subfield code="d">Cham : Springer,c2023</subfield><subfield code="z">9783031408366</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">ZDB-30-PQE</subfield></datafield><datafield tag="943" ind1="1" ind2=" "><subfield code="a">oai:aleph.bib-bvb.de:BVB01-035437721</subfield></datafield><datafield tag="966" ind1="e" ind2=" "><subfield code="u">https://ebookcentral.proquest.com/lib/hwr/detail.action?docID=30717195</subfield><subfield code="l">DE-2070s</subfield><subfield code="p">ZDB-30-PQE</subfield><subfield 
code="q">HWR_PDA_PQE</subfield><subfield code="x">Aggregator</subfield><subfield code="3">Volltext</subfield></datafield></record></collection> |
genre | (DE-588)1071861417 Konferenzschrift 2023 Benevent gnd-content |
genre_facet | Konferenzschrift 2023 Benevent |
id | DE-604.BV050100559 |
illustrated | Not Illustrated |
indexdate | 2025-01-10T19:01:53Z |
institution | BVB |
isbn | 9783031408373 |
language | English |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-035437721 |
oclc_num | 1395507863 |
open_access_boolean | |
owner | DE-2070s |
owner_facet | DE-2070s |
physical | 1 Online-Ressource (335 Seiten) |
psigel | ZDB-30-PQE ZDB-30-PQE HWR_PDA_PQE |
publishDate | 2023 |
publishDateSearch | 2023 |
publishDateSort | 2023 |
publisher | Springer |
record_format | marc |
series2 | Lecture Notes in Computer Science Series |
spelling | Holzinger, Andreas Verfasser aut Machine Learning and Knowledge Extraction 7th IFIP TC 5, TC 12, WG 8. 4, WG 8. 9, WG 12. 9 International Cross-Domain Conference, CD-MAKE 2023, Benevento, Italy, August 29 - September 1, 2023, Proceedings 1st ed Cham Springer 2023 ©2023 1 Online-Ressource (335 Seiten) txt rdacontent c rdamedia cr rdacarrier Lecture Notes in Computer Science Series v.14065 Description based on publisher supplied metadata and other sources Intro -- Preface -- Organization -- Contents -- About the Editors -- Controllable AI - An Alternative to Trustworthiness in Complex AI Systems? -- 1 Introduction and Motivation -- 2 Background -- 2.1 The Explainability Problem -- 2.2 Trustworthy AI -- 2.3 The AI Act -- 3 Principles of Controllable AI -- 4 Techniques for Controllable AI -- 4.1 Detecting Control Loss -- 4.2 Managing Control Loss -- 4.3 Support Measures -- 5 Discussion -- 6 Conclusion and Outlook for Future Research -- References -- Efficient Approximation of Asymmetric Shapley Values Using Functional Decomposition -- 1 Introduction -- 1.1 Shapley Values -- 1.2 Asymmetric Shapley Values -- 1.3 PDD-SHAP -- 2 A-PDD-SHAP -- 3 Experiments -- 3.1 Causal Explanations of Unfair Discrimination -- 3.2 Evaluation on Real-World Datasets -- 4 Summary and Outlook -- 5 Conclusion -- A Proof of Theorem 1 -- References -- Domain-Specific Evaluation of Visual Explanations for Application-Grounded Facial Expression Recognition -- 1 Introduction -- 2 Related Work -- 3 Materials and Methods -- 3.1 Evaluation Framework Overview -- 3.2 Step 1: Data Set and Model Selection -- 3.3 Step 2: Model Performance Analysis -- 3.4 Step 3: Visual Classification Explanations -- 3.5 Step 4: Domain-Specific Evaluation Based on Landmarks -- 4 Results and Discussion -- 5 Conclusion -- References -- Human-in-the-Loop Integration with Domain-Knowledge Graphs for Explainable Federated Deep Learning -- 1 Introduction and Motivation -- 2 Background and Related Work -- 2.1 Explainable 
AI on Graph Neural Networks -- 2.2 Federated Learning -- 2.3 Knowledge Graphs -- 2.4 Human-in-the-Loop -- 3 Methods, Solutions and Implementations -- 3.1 Disease Subnetwork Detection -- 3.2 Explainability -- 3.3 Knowledge Graph -- 3.4 Federated Ensemble Learning with GNNs -- 3.5 interaCtive expLainable plAtform for gRaph neUral networkS (CLARUS) 4 Lessons Learned -- 5 Conclusion and Future Outlook -- References -- The Tower of Babel in Explainable Artificial Intelligence (XAI) -- 1 Introduction and Motivation -- 2 Ethics Guidelines and XAI -- 3 Law and XAI -- 3.1 GDPR -- 3.2 Digital Services Act (DSA) -- 3.3 The (Proposed) Artificial Intelligence Act (AIA) -- 4 Standardization and XAI -- 5 The Link Between Law and Standardization -- 6 A Proposed Solution -- 7 Conclusion -- References -- Hyper-Stacked: Scalable and Distributed Approach to AutoML for Big Data -- 1 Introduction -- 2 Background and Related Work -- 2.1 Problem Definition -- 2.2 CASH Methods -- 2.3 Ensemble Learning -- 2.4 Meta-learning -- 2.5 Spark -- 3 Hyper-Stacked: A Scalable and Distributed Approach to AutoML for Big Data -- 3.1 Motivation -- 3.2 Hyper-Stacked's Design and Workflow -- 4 Experimental Design -- 4.1 Binary Supervised Learning Problems -- 4.2 Experimental Setups -- 4.3 Experiment Speedup, Sizeup, Scaleup -- 5 Analysis of Results -- 5.1 Speedup -- 5.2 Sizeup -- 5.3 Scaleup -- 6 Conclusions -- References -- Transformers are Short-Text Classifiers -- 1 Introduction -- 2 Related Work -- 2.1 Sequence-Based Models -- 2.2 Graph-Based Models -- 2.3 Short Text Models -- 2.4 Summary -- 3 Selected Models for Our Comparison -- 3.1 Models for Short Text Classification -- 3.2 Top-Performing Models for Text Classification -- 4 Experimental Apparatus -- 4.1 Datasets -- 4.2 Preprocessing -- 4.3 Procedure -- 4.4 Hyperparameter Optimization -- 4.5 Metrics -- 5 Results -- 6 Discussion -- 6.1 Key Results -- 6.2 Threats to Validity -- 6.3 Parameter Count of Models -- 6.4 Generalization -- 7 Conclusion and 
Future Work -- References -- Reinforcement Learning with Temporal-Logic-Based Causal Diagrams -- 1 Introduction -- 2 Motivating Example -- 3 Related Work -- 4 Preliminaries -- 5 Temporal-Logic-Based Causal Diagrams 6 Reinforcement Learning with Causal Diagrams -- 6.1 Q-Learning with Early Stopping -- 7 Case Studies -- 7.1 Case Study I: Small Office World Domain -- 7.2 Case Study II: Large Office World Domain -- 7.3 Case Study III: Crossroad Domain -- 8 Conclusions and Discussions -- References -- Using Machine Learning to Generate a Dictionary for Environmental Issues -- 1 Introduction -- 1.1 Findings -- 1.2 This Paper -- 2 Background -- 2.1 Word2Vec - CBOW and Skip-Gram -- 2.2 ChatGPT -- 2.3 Business Text - Form 10K and Earning Call Conferences -- 2.4 Role of Human in the Loop -- 3 Bag of Words -- 3.1 Concepts as Ontologies Represented as Bag of Words -- 3.2 Example System - LIWC -- 3.3 Concept of Interest - Carbon Footprint -- 4 Word2Vec - Single Words -- 4.1 Data for Word2Vec -- 4.2 Approach -- 4.3 Findings -- 4.4 Implications -- 4.5 Human in the Loop -- 5 Comparison of Word Lists Between Word2Vec and ChatGPT -- 5.1 Approach -- 5.2 Using ChatGPT to Facilitate List Analysis -- 5.3 Findings -- 5.4 Implications -- 6 Building a Dictionary -- 6.1 Carbon Footprint -- 6.2 Other Environmental Dictionaries -- 7 Positive, Negative or Action Dictionaries -- 8 Summary, Contributions and Extensions -- 8.1 Contributions -- 8.2 Extensions -- References -- Let Me Think! Investigating the Effect of Explanations Feeding Doubts About the AI Advice -- 1 Motivations and Background -- 2 Methods -- 2.1 Statistical Analysis -- 3 Results -- 3.1 RQ1: Impact on Decision Performance -- 3.2 RQ2: Impact on Decision Confidence -- 3.3 RQ3: Impact on Perceived Utility -- 4 Discussion -- 5 Conclusions -- References -- Enhancing Trust in Machine Learning Systems by Formal Methods -- 1 Introduction -- 2 State of the Art -- 3 What is an "Explanation"? 
-- 3.1 A Brief Survey of the Term Explanation in the Philosophy of Science -- 3.2 Defining Explanation for Systems Based on Machine Learning 4 Generating an Explanation -- 5 Applying the Method to a Meteorological Example -- 5.1 Description of the Meteorological Problem -- 5.2 Machine Learning Approach -- 5.3 Constructing the Explanation -- 6 Conclusions -- References -- Sustainability Effects of Robust and Resilient Artificial Intelligence -- 1 Introduction -- 2 Research Method -- 3 Robust and Resilient Artificial Intelligence -- 4 Sustainability Effects -- 4.1 Direct Sustainability Effects -- 4.2 Sustainability Effects in Selected Application Areas -- 5 Conclusion -- References -- The Split Matters: Flat Minima Methods for Improving the Performance of GNNs -- 1 Introduction -- 2 Related Work -- 2.1 Searching for Flat Minima -- 2.2 Graph Neural Networks -- 3 Flat Minima Methods -- 4 Experimental Apparatus -- 4.1 Datasets -- 4.2 Procedure -- 4.3 Hyperparameters -- 4.4 Metrics -- 5 Results -- 6 Discussion -- 6.1 Key Insights -- 6.2 Combining up to Three Flat Minima Methods -- 6.3 Influence of Dataset Splits -- 6.4 Transductive vs. 
Inductive Training -- 6.5 Detailed Discussion of Graph-MLP -- 6.6 Assumptions and Limitations -- 7 Conclusion -- A Hyperparameters -- A.1 Base Models -- A.2 Flat Minima Methods -- B Standard Deviations of Results -- References -- Probabilistic Framework Based on Deep Learning for Differentiating Ultrasound Movie View Planes -- 1 Introduction -- 2 Materials and Methods -- 2.1 The Deep Learning Algorithm -- 2.2 Fetal Abdomen Dataset -- 3 Results -- 4 Weighted Voted System -- 5 Discussion -- 6 Conclusions -- References -- Standing Still Is Not an Option: Alternative Baselines for Attainable Utility Preservation -- 1 Introduction -- 2 Related Work -- 3 Attainable Utility Preservation -- 4 Methods -- 5 Experimental Design -- 5.1 Environments -- 5.2 General Settings -- 6 Results -- 6.1 Comparison to AUP -- 6.2 Dropping the No-Op Action -- 7 Discussion -- 8 Conclusion 8.1 Future Work -- References -- Memorization of Named Entities in Fine-Tuned BERT Models -- 1 Introduction -- 2 Related Work -- 2.1 Language Models and Text Generation -- 2.2 Privacy Attacks in Machine Learning -- 2.3 Privacy Preserving Deep Learning -- 3 Extracting Named Entities from BERT -- 3.1 Fine-Tuning -- 3.2 Text Generation -- 3.3 Evaluating Named Entity Memorization -- 4 Experimental Apparatus -- 4.1 Datasets -- 4.2 Procedure and Implementation -- 4.3 Hyperparameter Optimization -- 4.4 Measures -- 5 Results -- 5.1 Classification -- 5.2 Named Entity Memorization -- 6 Discussion -- 6.1 Key Insights -- 6.2 Generalization -- 6.3 Threats to Validity -- 7 Conclusion -- References -- Event and Entity Extraction from Generated Video Captions -- 1 Introduction -- 2 Related Work -- 2.1 Dense Video Captioning -- 2.2 Text Information Extraction and Classification -- 3 Semantic Metadata Extraction from Videos -- 3.1 Dense Video Captioning (DVC) -- 3.2 Event Processing -- 3.3 Language Processing -- 3.4 Entity Extraction -- 3.5 Property Extraction -- 3.6 Relation Extraction -- 3.7 Text Classification -- 4 
Experimental Apparatus -- 4.1 Datasets -- 4.2 Procedure -- 4.3 Hyperparameter Optimization -- 4.4 Measures and Metrics -- 5 Results -- 5.1 Dense Video Captioning -- 5.2 Entity Extraction -- 5.3 Property Extraction -- 5.4 Relation Extraction -- 5.5 Text Classification -- 6 Discussion -- 6.1 Key Results -- 6.2 Threats to Validity and Future Work -- 7 Conclusion -- References -- Fine-Tuning Language Models for Scientific Writing Support -- 1 Introduction -- 2 Related Work -- 2.1 Pre-trained Encoder Language Models -- 2.2 Pre-trained Decoder Language Models -- 2.3 Text Classification -- 2.4 Sentence Transformation and Paraphrasing -- 2.5 Tools to Improve Writing Quality -- 3 Experimental Apparatus -- 3.1 Datasets -- 3.2 Preprocessing -- 3.3 Procedure 3.4 Hyperparameter Optimization Knowledge management-Congresses Machine learning-Congresses Wissensextraktion (DE-588)4546354-2 gnd rswk-swf Maschinelles Lernen (DE-588)4193754-5 gnd rswk-swf (DE-588)1071861417 Konferenzschrift 2023 Benevent gnd-content Maschinelles Lernen (DE-588)4193754-5 s Wissensextraktion (DE-588)4546354-2 s DE-604 Kieseberg, Peter Sonstige oth Cabitza, Federico Sonstige oth Campagner, Andrea Sonstige oth Tjoa, A. Min Sonstige oth Weippl, Edgar Sonstige oth Erscheint auch als Druck-Ausgabe Holzinger, Andreas Machine Learning and Knowledge Extraction Cham : Springer,c2023 9783031408366 |
spellingShingle | Holzinger, Andreas Machine Learning and Knowledge Extraction 7th IFIP TC 5, TC 12, WG 8. 4, WG 8. 9, WG 12. 9 International Cross-Domain Conference, CD-MAKE 2023, Benevento, Italy, August 29 - September 1, 2023, Proceedings Intro -- Preface -- Organization -- Contents -- About the Editors -- Controllable AI - An Alternative to Trustworthiness in Complex AI Systems? -- 1 Introduction and Motivation -- 2 Background -- 2.1 The Explainability Problem -- 2.2 Trustworthy AI -- 2.3 The AI Act -- 3 Principles of Controllable AI -- 4 Techniques for Controllable AI -- 4.1 Detecting Control Loss -- 4.2 Managing Control Loss -- 4.3 Support Measures -- 5 Discussion -- 6 Conclusion and Outlook for Future Research -- References -- Efficient Approximation of Asymmetric Shapley Values Using Functional Decomposition -- 1 Introduction -- 1.1 Shapley Values -- 1.2 Asymmetric Shapley Values -- 1.3 PDD-SHAP -- 2 A-PDD-SHAP -- 3 Experiments -- 3.1 Causal Explanations of Unfair Discrimination -- 3.2 Evaluation on Real-World Datasets -- 4 Summary and Outlook -- 5 Conclusion -- A Proof of Theorem 1 -- References -- Domain-Specific Evaluation of Visual Explanations for Application-Grounded Facial Expression Recognition -- 1 Introduction -- 2 Related Work -- 3 Materials and Methods -- 3.1 Evaluation Framework Overview -- 3.2 Step 1: Data Set and Model Selection -- 3.3 Step 2: Model Performance Analysis -- 3.4 Step 3: Visual Classification Explanations -- 3.5 Step 4: Domain-Specific Evaluation Based on Landmarks -- 4 Results and Discussion -- 5 Conclusion -- References -- Human-in-the-Loop Integration with Domain-Knowledge Graphs for Explainable Federated Deep Learning -- 1 Introduction and Motivation -- 2 Background and Related Work -- 2.1 Explainable AI on Graph Neural Networks -- 2.2 Federated Learning -- 2.3 Knowledge Graphs -- 2.4 Human-in-the-Loop -- 3 Methods, Solutions and Implementations -- 3.1 Disease Subnetwork Detection -- 3.2 Explainability -- 3.3 Knowledge Graph 
-- 3.4 Federated Ensemble Learning with GNNs -- 3.5 interaCtive expLainable plAtform for gRaph neUral networkS (CLARUS) 4 Lessons Learned -- 5 Conclusion and Future Outlook -- References -- The Tower of Babel in Explainable Artificial Intelligence (XAI) -- 1 Introduction and Motivation -- 2 Ethics Guidelines and XAI -- 3 Law and XAI -- 3.1 GDPR -- 3.2 Digital Services Act (DSA) -- 3.3 The (Proposed) Artificial Intelligence Act (AIA) -- 4 Standardization and XAI -- 5 The Link Between Law and Standardization -- 6 A Proposed Solution -- 7 Conclusion -- References -- Hyper-Stacked: Scalable and Distributed Approach to AutoML for Big Data -- 1 Introduction -- 2 Background and Related Work -- 2.1 Problem Definition -- 2.2 CASH Methods -- 2.3 Ensemble Learning -- 2.4 Meta-learning -- 2.5 Spark -- 3 Hyper-Stacked: A Scalable and Distributed Approach to AutoML for Big Data -- 3.1 Motivation -- 3.2 Hyper-Stacked's Design and Workflow -- 4 Experimental Design -- 4.1 Binary Supervised Learning Problems -- 4.2 Experimental Setups -- 4.3 Experiment Speedup, Sizeup, Scaleup -- 5 Analysis of Results -- 5.1 Speedup -- 5.2 Sizeup -- 5.3 Scaleup -- 6 Conclusions -- References -- Transformers are Short-Text Classifiers -- 1 Introduction -- 2 Related Work -- 2.1 Sequence-Based Models -- 2.2 Graph-Based Models -- 2.3 Short Text Models -- 2.4 Summary -- 3 Selected Models for Our Comparison -- 3.1 Models for Short Text Classification -- 3.2 Top-Performing Models for Text Classification -- 4 Experimental Apparatus -- 4.1 Datasets -- 4.2 Preprocessing -- 4.3 Procedure -- 4.4 Hyperparameter Optimization -- 4.5 Metrics -- 5 Results -- 6 Discussion -- 6.1 Key Results -- 6.2 Threats to Validity -- 6.3 Parameter Count of Models -- 6.4 Generalization -- 7 Conclusion and Future Work -- References -- Reinforcement Learning with Temporal-Logic-Based Causal Diagrams -- 1 Introduction -- 2 Motivating Example -- 3 Related Work -- 4 Preliminaries -- 5 Temporal-Logic-Based Causal Diagrams 6 Reinforcement 
Learning with Causal Diagrams -- 6.1 Q-Learning with Early Stopping -- 7 Case Studies -- 7.1 Case Study I: Small Office World Domain -- 7.2 Case Study II: Large Office World Domain -- 7.3 Case Study III: Crossroad Domain -- 8 Conclusions and Discussions -- References -- Using Machine Learning to Generate a Dictionary for Environmental Issues -- 1 Introduction -- 1.1 Findings -- 1.2 This Paper -- 2 Background -- 2.1 Word2Vec - CBOW and Skip-Gram -- 2.2 ChatGPT -- 2.3 Business Text - Form 10K and Earning Call Conferences -- 2.4 Role of Human in the Loop -- 3 Bag of Words -- 3.1 Concepts as Ontologies Represented as Bag of Words -- 3.2 Example System - LIWC -- 3.3 Concept of Interest - Carbon Footprint -- 4 Word2Vec - Single Words -- 4.1 Data for Word2Vec -- 4.2 Approach -- 4.3 Findings -- 4.4 Implications -- 4.5 Human in the Loop -- 5 Comparison of Word Lists Between Word2Vec and ChatGPT -- 5.1 Approach -- 5.2 Using ChatGPT to Facilitate List Analysis -- 5.3 Findings -- 5.4 Implications -- 6 Building a Dictionary -- 6.1 Carbon Footprint -- 6.2 Other Environmental Dictionaries -- 7 Positive, Negative or Action Dictionaries -- 8 Summary, Contributions and Extensions -- 8.1 Contributions -- 8.2 Extensions -- References -- Let Me Think! Investigating the Effect of Explanations Feeding Doubts About the AI Advice -- 1 Motivations and Background -- 2 Methods -- 2.1 Statistical Analysis -- 3 Results -- 3.1 RQ1: Impact on Decision Performance -- 3.2 RQ2: Impact on Decision Confidence -- 3.3 RQ3: Impact on Perceived Utility -- 4 Discussion -- 5 Conclusions -- References -- Enhancing Trust in Machine Learning Systems by Formal Methods -- 1 Introduction -- 2 State of the Art -- 3 What is an "Explanation"? 
-- 3.1 A Brief Survey of the Term Explanation in the Philosophy of Science -- 3.2 Defining Explanation for Systems Based on Machine Learning 4 Generating an Explanation -- 5 Applying the Method to a Meteorological Example -- 5.1 Description of the Meteorological Problem -- 5.2 Machine Learning Approach -- 5.3 Constructing the Explanation -- 6 Conclusions -- References -- Sustainability Effects of Robust and Resilient Artificial Intelligence -- 1 Introduction -- 2 Research Method -- 3 Robust and Resilient Artificial Intelligence -- 4 Sustainability Effects -- 4.1 Direct Sustainability Effects -- 4.2 Sustainability Effects in Selected Application Areas -- 5 Conclusion -- References -- The Split Matters: Flat Minima Methods for Improving the Performance of GNNs -- 1 Introduction -- 2 Related Work -- 2.1 Searching for Flat Minima -- 2.2 Graph Neural Networks -- 3 Flat Minima Methods -- 4 Experimental Apparatus -- 4.1 Datasets -- 4.2 Procedure -- 4.3 Hyperparameters -- 4.4 Metrics -- 5 Results -- 6 Discussion -- 6.1 Key Insights -- 6.2 Combining up to Three Flat Minima Methods -- 6.3 Influence of Dataset Splits -- 6.4 Transductive vs. 
Inductive Training -- 6.5 Detailed Discussion of Graph-MLP -- 6.6 Assumptions and Limitations -- 7 Conclusion -- A Hyperparameters -- A.1 Base Models -- A.2 Flat Minima Methods -- B Standard Deviations of Results -- References -- Probabilistic Framework Based on Deep Learning for Differentiating Ultrasound Movie View Planes -- 1 Introduction -- 2 Materials and Methods -- 2.1 The Deep Learning Algorithm -- 2.2 Fetal Abdomen Dataset -- 3 Results -- 4 Weighted Voted System -- 5 Discussion -- 6 Conclusions -- References -- Standing Still Is Not an Option: Alternative Baselines for Attainable Utility Preservation -- 1 Introduction -- 2 Related Work -- 3 Attainable Utility Preservation -- 4 Methods -- 5 Experimental Design -- 5.1 Environments -- 5.2 General Settings -- 6 Results -- 6.1 Comparison to AUP -- 6.2 Dropping the No-Op Action -- 7 Discussion -- 8 Conclusion 8.1 Future Work -- References -- Memorization of Named Entities in Fine-Tuned BERT Models -- 1 Introduction -- 2 Related Work -- 2.1 Language Models and Text Generation -- 2.2 Privacy Attacks in Machine Learning -- 2.3 Privacy Preserving Deep Learning -- 3 Extracting Named Entities from BERT -- 3.1 Fine-Tuning -- 3.2 Text Generation -- 3.3 Evaluating Named Entity Memorization -- 4 Experimental Apparatus -- 4.1 Datasets -- 4.2 Procedure and Implementation -- 4.3 Hyperparameter Optimization -- 4.4 Measures -- 5 Results -- 5.1 Classification -- 5.2 Named Entity Memorization -- 6 Discussion -- 6.1 Key Insights -- 6.2 Generalization -- 6.3 Threats to Validity -- 7 Conclusion -- References -- Event and Entity Extraction from Generated Video Captions -- 1 Introduction -- 2 Related Work -- 2.1 Dense Video Captioning -- 2.2 Text Information Extraction and Classification -- 3 Semantic Metadata Extraction from Videos -- 3.1 Dense Video Captioning (DVC) -- 3.2 Event Processing -- 3.3 Language Processing -- 3.4 Entity Extraction -- 3.5 Property Extraction -- 3.6 Relation Extraction -- 3.7 Text Classification -- 4 
Experimental Apparatus -- 4.1 Datasets -- 4.2 Procedure -- 4.3 Hyperparameter Optimization -- 4.4 Measures and Metrics -- 5 Results -- 5.1 Dense Video Captioning -- 5.2 Entity Extraction -- 5.3 Property Extraction -- 5.4 Relation Extraction -- 5.5 Text Classification -- 6 Discussion -- 6.1 Key Results -- 6.2 Threats to Validity and Future Work -- 7 Conclusion -- References -- Fine-Tuning Language Models for Scientific Writing Support -- 1 Introduction -- 2 Related Work -- 2.1 Pre-trained Encoder Language Models -- 2.2 Pre-trained Decoder Language Models -- 2.3 Text Classification -- 2.4 Sentence Transformation and Paraphrasing -- 2.5 Tools to Improve Writing Quality -- 3 Experimental Apparatus -- 3.1 Datasets -- 3.2 Preprocessing -- 3.3 Procedure 3.4 Hyperparameter Optimization Knowledge management-Congresses Machine learning-Congresses Wissensextraktion (DE-588)4546354-2 gnd Maschinelles Lernen (DE-588)4193754-5 gnd |
subject_GND | (DE-588)4546354-2 (DE-588)4193754-5 (DE-588)1071861417 |
title | Machine Learning and Knowledge Extraction 7th IFIP TC 5, TC 12, WG 8. 4, WG 8. 9, WG 12. 9 International Cross-Domain Conference, CD-MAKE 2023, Benevento, Italy, August 29 - September 1, 2023, Proceedings |
title_auth | Machine Learning and Knowledge Extraction 7th IFIP TC 5, TC 12, WG 8. 4, WG 8. 9, WG 12. 9 International Cross-Domain Conference, CD-MAKE 2023, Benevento, Italy, August 29 - September 1, 2023, Proceedings |
title_exact_search | Machine Learning and Knowledge Extraction 7th IFIP TC 5, TC 12, WG 8. 4, WG 8. 9, WG 12. 9 International Cross-Domain Conference, CD-MAKE 2023, Benevento, Italy, August 29 - September 1, 2023, Proceedings |
title_full | Machine Learning and Knowledge Extraction 7th IFIP TC 5, TC 12, WG 8. 4, WG 8. 9, WG 12. 9 International Cross-Domain Conference, CD-MAKE 2023, Benevento, Italy, August 29 - September 1, 2023, Proceedings |
title_fullStr | Machine Learning and Knowledge Extraction 7th IFIP TC 5, TC 12, WG 8. 4, WG 8. 9, WG 12. 9 International Cross-Domain Conference, CD-MAKE 2023, Benevento, Italy, August 29 - September 1, 2023, Proceedings |
title_full_unstemmed | Machine Learning and Knowledge Extraction 7th IFIP TC 5, TC 12, WG 8. 4, WG 8. 9, WG 12. 9 International Cross-Domain Conference, CD-MAKE 2023, Benevento, Italy, August 29 - September 1, 2023, Proceedings |
title_short | Machine Learning and Knowledge Extraction |
title_sort | machine learning and knowledge extraction 7th ifip tc 5 tc 12 wg 8 4 wg 8 9 wg 12 9 international cross domain conference cd make 2023 benevento italy august 29 september 1 2023 proceedings |
title_sub | 7th IFIP TC 5, TC 12, WG 8. 4, WG 8. 9, WG 12. 9 International Cross-Domain Conference, CD-MAKE 2023, Benevento, Italy, August 29 - September 1, 2023, Proceedings |
topic | Knowledge management-Congresses Machine learning-Congresses Wissensextraktion (DE-588)4546354-2 gnd Maschinelles Lernen (DE-588)4193754-5 gnd |
topic_facet | Knowledge management-Congresses Machine learning-Congresses Wissensextraktion Maschinelles Lernen Konferenzschrift 2023 Benevent |
work_keys_str_mv | AT holzingerandreas machinelearningandknowledgeextraction7thifiptc5tc12wg84wg89wg129internationalcrossdomainconferencecdmake2023beneventoitalyaugust29september12023proceedings AT kiesebergpeter machinelearningandknowledgeextraction7thifiptc5tc12wg84wg89wg129internationalcrossdomainconferencecdmake2023beneventoitalyaugust29september12023proceedings AT cabitzafederico machinelearningandknowledgeextraction7thifiptc5tc12wg84wg89wg129internationalcrossdomainconferencecdmake2023beneventoitalyaugust29september12023proceedings AT campagnerandrea machinelearningandknowledgeextraction7thifiptc5tc12wg84wg89wg129internationalcrossdomainconferencecdmake2023beneventoitalyaugust29september12023proceedings AT tjoaamin machinelearningandknowledgeextraction7thifiptc5tc12wg84wg89wg129internationalcrossdomainconferencecdmake2023beneventoitalyaugust29september12023proceedings AT weippledgar machinelearningandknowledgeextraction7thifiptc5tc12wg84wg89wg129internationalcrossdomainconferencecdmake2023beneventoitalyaugust29september12023proceedings |