An architecture facilitating repair and replanning in interactive explanations
Saved in:

Author: | Carletta, Jean |
---|---|
Format: | Book |
Language: | English |
Published: | Edinburgh, 1990 |
Series: | University <Edinburgh> / Department of Artificial Intelligence: DAI research paper 478 |
Subjects: | Bionics and artificial intelligence; Computer software; Künstliche Intelligenz; Artificial intelligence; Planning |
Summary: | Abstract: "This paper describes a planning architecture which allows interruptions, checking moves which ask whether an explanation is understood, and repair and replanning strategies. We define repair as recovering from some misconception or missing information which has prevented an explanation from being understood, whether the repair is initiated by the explainer or the explainee. Repair by the explainer is implemented using Moore's [4] method of inspecting the prerequisites of faulty plans. Repair by the explainee involves allowing restricted interruptions of the explanation. Replanning involves abandoning the current explanation and restarting with a different approach. It is appropriate where the cause of an explanation's failure cannot be found or when repair is thought to be more costly than beginning again. Since humans are able to use repair and replanning to fix interactive explanations, explanation systems should take these strategies into account. Peachey and McCalla [5], Moore [4], and Cawsey [1] all have systems which can replan some part of an explanation. However, their systems always choose to replan as little of the explanation as possible, and will only replan when every other means of providing an explanation has failed. Human explanations appear to allow replanning more generally, depending on the estimated difficulties of possible courses of action. The system described can replan any unfulfilled discourse goal at any planning step and decides which recovery strategies to employ by evaluating the different possibilities. The replanning can also be restricted by specifying which kinds of goals and planning steps may be replanned. The planner is being implemented in a domain where one computer process explains to another in an artificial language how to navigate around a simple map." |
Description: | 8 pages |
Internal format (MARC)
LEADER | 00000nam a2200000 cb4500 | ||
---|---|---|---|
001 | BV010452201 | ||
003 | DE-604 | ||
005 | 00000000000000.0 | ||
007 | t | ||
008 | 951027s1990 |||| 00||| engod | ||
035 | |a (OCoLC)23725267 | ||
035 | |a (DE-599)BVBBV010452201 | ||
040 | |a DE-604 |b ger |e rakddb | ||
041 | 0 | |a eng | |
049 | |a DE-91G | ||
100 | 1 | |a Carletta, Jean |e Verfasser |4 aut | |
245 | 1 | 0 | |a An architecture facilitating repair and replanning in interactive explanations |
264 | 1 | |a Edinburgh |c 1990 | |
300 | |a 8 S. | ||
336 | |b txt |2 rdacontent | ||
337 | |b n |2 rdamedia | ||
338 | |b nc |2 rdacarrier | ||
490 | 1 | |a University <Edinburgh> / Department of Artificial Intelligence: DAI research paper |v 478 | |
520 | 3 | |a Abstract: "This paper describes a planning architecture which allows interruptions, checking moves which ask whether an explanation is understood, and repair and replanning strategies. We define repair as recovering from some misconception or missing information which has prevented an explanation from being understood, whether the repair is initiated by the explainer or the explainee. Repair by the explainer is implemented using Moore's [4] method of inspecting the prerequisites of faulty plans. Repair by the explainee involves allowing restricted interruptions of the explanation. Replanning involves abandoning the current explanation and restarting with a different approach. | |
520 | 3 | |a It is appropriate where the cause of an explanation's failure cannot be found or when repair is thought to be more costly than beginning again. Since humans are able to use repair and replanning to fix interactive explanations, explanation systems should take these strategies into account. Peachey and McCalla [5], Moore [4], and Cawsey [1] all have systems which can replan some part of an explanation. However, their systems always choose to replan as little of the explanation as possible, and will only replan when every other means of providing an explanation has failed. Human explanations appear to allow replanning more generally, depending on the estimated difficulties of possible courses of action. | |
520 | 3 | |a The system described can replan any unfulfilled discourse goal at any planning step and decides which recovery strategies to employ by evaluating the different possibilities. The replanning can also be restricted by specifying which kinds of goals and planning steps may be replanned. The planner is being implemented in a domain where one computer process explains to another in an artificial language how to navigate around a simple map. | |
650 | 7 | |a Bionics and artificial intelligence |2 sigle | |
650 | 7 | |a Computer software |2 sigle | |
650 | 4 | |a Künstliche Intelligenz | |
650 | 4 | |a Artificial intelligence | |
650 | 4 | |a Planning | |
810 | 2 | |a Department of Artificial Intelligence: DAI research paper |t University <Edinburgh> |v 478 |w (DE-604)BV010450646 |9 478 | |
999 | |a oai:aleph.bib-bvb.de:BVB01-006965145 |