Understandable explanations: the FUDGE discourse generator
Abstract: "This paper develops two principles for generating understandable explanations, and presents a simple computational system based on those principles. First, explanations should be planned so that, in principle, the concepts in them may be understood by the particular hearer. This invo...
Saved in:

| Main Author: | Cawsey, Alison |
| --- | --- |
| Format: | Book |
| Language: | English |
| Published: | Edinburgh, 1990 |
| Series: | University <Edinburgh> / Department of Artificial Intelligence: DAI research paper; 473 |
| Subjects: | Bionics and artificial intelligence; Discourse analysis / Computer simulation |
| Summary: | Abstract: "This paper develops two principles for generating understandable explanations, and presents a simple computational system based on those principles. First, explanations should be planned so that, in principle, the concepts in them may be understood by the particular hearer. This involves making use of both general learning/teaching methods (such as by analogy or by example) and information about how different types of knowledge (such as procedures, objects or processes) may be described using these. Then, given assumptions about the user's knowledge, it should be possible to generate understandable explanations in a fairly principled manner. Second, our model of the hearer's knowledge is very unlikely to be correct (though it may be a useful approximation). It is therefore vital to allow the user to signal whether or not they are following, and to ask clarification questions as necessary. In a complex explanation these may take place within the explanation, and influence the progress of the explanation. These principles motivated a revised version of the EDGE explanatory discourse generator [Cawsey 89, Cawsey 90], which we will call FUDGE (Fairly Understandable Discourse GEnerator). This paper introduces the FUDGE system." |
| Description: | 9 p. |
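The first principle in the summary — plan the explanation so that, given assumptions about the hearer's knowledge, each concept is introduced by a teaching method the hearer can follow (such as analogy or example) — lends itself to a small illustration. The sketch below is a hypothetical reconstruction under those assumptions; the names (`Concept`, `UserModel`, `plan_explanation`) and the electricity example are invented for illustration and are not the actual EDGE/FUDGE implementation.

```python
# Hypothetical sketch of the first principle (not the actual EDGE/FUDGE code):
# plan an explanation so that, given our assumptions about the hearer,
# every concept is introduced by a method they can follow.

from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Concept:
    name: str
    kind: str                          # e.g. "procedure", "object", "process"
    prerequisites: list["Concept"] = field(default_factory=list)
    analogue: Optional[str] = None     # a familiar concept it resembles
    example: Optional[str] = None      # a concrete instance of it


@dataclass
class UserModel:
    known: set                         # concepts the hearer is assumed to know


def plan_explanation(concept: Concept, user: UserModel, plan=None):
    """Recursively choose teaching acts (define / by-analogy / by-example)
    so each step should be understandable to this particular hearer."""
    if plan is None:
        plan = []
    if concept.name in user.known:
        return plan                    # already understood: nothing to do
    for pre in concept.prerequisites:  # cover prerequisite concepts first
        plan_explanation(pre, user, plan)
    # Prefer an analogy to something familiar, then a concrete example.
    if concept.analogue and concept.analogue in user.known:
        plan.append(("explain-by-analogy", concept.name, concept.analogue))
    elif concept.example:
        plan.append(("explain-by-example", concept.name, concept.example))
    else:
        plan.append(("define", concept.name))
    user.known.add(concept.name)       # assume the act succeeds (see below)
    return plan


circuit = Concept("circuit", "object")
current = Concept("current", "process", prerequisites=[circuit],
                  analogue="water-flow")
print(plan_explanation(current, UserModel(known={"water-flow", "circuit"})))
# -> [('explain-by-analogy', 'current', 'water-flow')]
```

Note the closing assumption: the planner marks each concept as understood once explained, which is exactly the fallible user-model assumption the paper's second principle addresses.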
Internal format (MARC)
LEADER  00000nam a2200000 cb4500
001     BV010451907
003     DE-604
005     20020301
007     t
008     951026s1990 |||| 00||| eng d
035     |a (OCoLC)23725293
035     |a (DE-599)BVBBV010451907
040     |a DE-604 |b ger |e rakwb
041 0_  |a eng
049     |a DE-91G
100 1_  |a Cawsey, Alison |e Verfasser |4 aut
245 10  |a Understandable explanations: the FUDGE discourse generator
264 _1  |a Edinburgh |c 1990
300     |a 9 S.
336     |b txt |2 rdacontent
337     |b n |2 rdamedia
338     |b nc |2 rdacarrier
490 1_  |a University <Edinburgh> / Department of Artificial Intelligence: DAI research paper |v 473
520 3_  |a Abstract: "This paper develops two principles for generating understandable explanations, and presents a simple computational system based on those principles. First, explanations should be planned so that, in principle, the concepts in them may be understood by the particular hearer. This involves making use of both general learning/teaching methods (such as by analogy or by example) and information about how different types of knowledge (such as procedures, objects or processes) may be described using these. Then, given assumptions about the user's knowledge, it should be possible to generate understandable explanations in a fairly principled manner.
520 3_  |a Second, our model of the hearer's knowledge is very unlikely to be correct (though it may be a useful approximation). It is therefore vital to allow the user to signal whether or not they are following, and to ask clarification questions as necessary. In a complex explanation these may take place within the explanation, and influence the progress of the explanation. These principles motivated a revised version of the EDGE explanatory discourse generator [Cawsey 89, Cawsey 90], which we will call FUDGE (Fairly Understandable Discourse GEnerator). This paper introduces the FUDGE system."
650 _7  |a Bionics and artificial intelligence |2 sigle
650 _4  |a Discourse analysis |x Computer simulation
810 2_  |a Department of Artificial Intelligence: DAI research paper |t University <Edinburgh> |v 473 |w (DE-604)BV010450646 |9 473
999     |a oai:aleph.bib-bvb.de:BVB01-006964899
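The second 520 field above describes the complementary principle: because the hearer model is probably wrong, the user must be able to interrupt within the explanation, and the resulting clarification subdialogues influence how the explanation proceeds. The following is again a hypothetical sketch with invented names (`deliver` and the yes/no protocol), not the paper's dialogue machinery.

```python
# Hypothetical sketch of the second principle (invented names, not the
# paper's dialogue machinery): the hearer can signal non-understanding
# after any step, which revises the user model and expands the explanation
# before the main explanation resumes.

from dataclasses import dataclass


@dataclass
class UserModel:                       # same stand-in as in the sketch above
    known: set


def deliver(plan, user: UserModel, ask=input):
    """Present plan steps one at a time; a 'no' triggers an in-explanation
    clarification step for that concept before moving on."""
    queue = list(plan)
    while queue:
        step = queue.pop(0)
        act, concept = step[0], step[1]
        detail = f" (via {step[2]})" if len(step) > 2 else ""
        print(f"[{act}] {concept}{detail}")
        if ask("Following so far? (yes/no) ").strip().lower() == "no":
            # Our assumption about the hearer was wrong: retract it and
            # elaborate on this concept within the ongoing explanation.
            user.known.discard(concept)
            queue.insert(0, ("elaborate", concept))


# Scripted run: the hearer rejects the analogy, accepts the elaboration.
answers = iter(["no", "yes"])
deliver([("explain-by-analogy", "current", "water-flow")],
        UserModel(known={"water-flow", "current"}),
        ask=lambda prompt: next(answers))
# [explain-by-analogy] current (via water-flow)
# [elaborate] current
```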