Content determination
Content determination is a subtask of Natural language generation, which involves deciding on the information to be communicated in a generated text. It is closely related to the Document structuring NLG task.
Example
Consider an NLG system which summarises information about sick babies. Suppose this system has four pieces of information it can communicate:
- The baby is being given morphine via an IV drip
- The baby's heart rate shows bradycardias (temporary drops)
- The baby's temperature is normal
- The baby is crying
Which of these bits of information should be included in the generated texts?
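One way to make the choice concrete is to treat each piece of information as a candidate message and content determination as selecting a subset of those messages. The following Python sketch is purely illustrative; the identifiers and the selection rule are invented for this example and do not come from any particular system.

```python
# Illustrative only: the four candidate messages from the example above.
messages = [
    {"id": "morphine_iv",  "text": "The baby is being given morphine via an IV drip"},
    {"id": "bradycardias", "text": "The baby's heart rate shows bradycardias (temporary drops)"},
    {"id": "temp_normal",  "text": "The baby's temperature is normal"},
    {"id": "crying",       "text": "The baby is crying"},
]

def determine_content(candidates, keep):
    """Content determination reduced to its simplest form: pick a subset of messages."""
    return [m for m in candidates if keep(m)]

# A short summary for a doctor might keep only the clinically significant message.
doctor_summary = determine_content(messages, lambda m: m["id"] == "bradycardias")
print([m["text"] for m in doctor_summary])
```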
Issues
There are three general issues which almost always impact the content determination task, and can be illustrated with the above example.
Perhaps the most fundamental issue is the communicative goal of the text, i.e. its purpose and reader. In the above example, for instance, a doctor who wants to make a decision about medical treatment would probably be most interested in the heart rate bradycardias, while a parent who wanted to know how her child was doing would probably be more interested in the fact that the baby was being given morphine and was crying.
The second issue is the size and level of detail of the generated text. For instance, a short summary which was sent to a doctor as a 160-character SMS text message might only mention the heart rate bradycardias, while a longer summary which was printed out as a multipage document might also mention the fact that the baby is on a morphine IV.
The final issue is how unusual and unexpected the information is. For example, neither doctors nor parents would place a high priority on being told that the baby's temperature was normal, if they expected this to be the case.
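The following sketch shows one way these three issues could be combined into a simple selection procedure. The relevance and expectedness scores are invented for illustration; real systems weigh these factors in many different ways.

```python
def score(message, reader):
    # Higher when the message matters to this reader and when it is unexpected.
    return message["relevance"][reader] + (1.0 - message["expectedness"])

def select(messages, reader, max_messages):
    # max_messages stands in for the target size / level of detail.
    ranked = sorted(messages, key=lambda m: score(m, reader), reverse=True)
    return [m["text"] for m in ranked[:max_messages]]

messages = [
    {"text": "heart rate shows bradycardias", "relevance": {"doctor": 0.9, "parent": 0.4}, "expectedness": 0.3},
    {"text": "baby is on a morphine IV",      "relevance": {"doctor": 0.5, "parent": 0.8}, "expectedness": 0.5},
    {"text": "temperature is normal",         "relevance": {"doctor": 0.2, "parent": 0.2}, "expectedness": 0.9},
    {"text": "baby is crying",                "relevance": {"doctor": 0.3, "parent": 0.8}, "expectedness": 0.6},
]

print(select(messages, "doctor", max_messages=1))  # short, clinically focused summary
print(select(messages, "parent", max_messages=3))  # longer, parent-oriented summary
```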
Regardless of how these issues are resolved, content determination is very important to users; indeed, in many cases the quality of content determination is the most important factor (from the user's perspective) in determining the overall quality of the generated text.
Techniques
There are three basic approaches to content determination: schemas (content templates), statistical approaches, and explicit reasoning.
Schemas are templates which explicitly specify the content of a generated text (as well as Document structuring information). Typically they are constructed by manually analysing a corpus of human-written texts in the target genre, and extracting a content template from these texts. Schemas work well in practice in domains where content is somewhat standardised, but work less well in domains where content is more fluid (such as the medical example above).
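As a rough sketch of the idea (the slot names and facts below are invented, not drawn from any particular system), a schema can be written down as an ordered list of content slots suggested by the corpus analysis, which is then filled from whatever information is available:

```python
# Invented schema for a short status report: the ordered content slots a corpus
# analysis might suggest human authors routinely include.
REPORT_SCHEMA = ["headline_finding", "supporting_detail", "background", "next_steps"]

def instantiate(schema, available):
    """Select (and order) content by filling the schema's slots from available facts."""
    return [available[slot] for slot in schema if slot in available]

available_facts = {
    "headline_finding": "Heart rate shows intermittent bradycardias.",
    "background": "The baby is receiving morphine via an IV drip.",
}
print(instantiate(REPORT_SCHEMA, available_facts))
```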
Statistical techniques use corpus analysis to automatically determine the content of the generated texts. Such work is still in its infancy, and has mostly been applied to contexts where the communicative goal, reader, size, and level of detail are fixed, such as the generation of newswire summaries of sporting events.
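A minimal sketch of this kind of approach, assuming a corpus in which each human-written summary has been annotated with the event types it mentions (the corpus data and threshold here are invented):

```python
from collections import Counter

# Invented annotations: the event types mentioned in four human-written match reports.
corpus = [
    {"final_score", "winning_goal", "attendance"},
    {"final_score", "winning_goal"},
    {"final_score", "red_card"},
    {"final_score", "winning_goal", "red_card"},
]
counts = Counter(event for report in corpus for event in report)

def select_events(available, threshold=0.5):
    # Keep event types that human authors mentioned in at least `threshold`
    # of the corpus reports.
    return [e for e in available if counts[e] / len(corpus) >= threshold]

print(select_events({"final_score", "winning_goal", "red_card", "attendance"}))
```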
Explicit reasoning approaches have probably attracted the most attention from researchers. The basic idea is to use AI reasoning techniques (such as knowledge-based rules, planning, pattern detection, case-based reasoning, etc.) to examine the information available to be communicated (including how unusual or unexpected it is), the communicative goal and reader, and the characteristics of the generated text (including target size), and to decide on the optimal content for the generated text. A very wide range of techniques has been explored, but there is no consensus as to which is most effective.
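As one illustration of the flavour of such approaches (the rules, attributes, and context fields are invented for this sketch), content can be selected by applying a small set of knowledge-based rules to each candidate message in the light of the reader and the target size:

```python
# Invented knowledge-based rules; each one looks at a candidate message and the
# communicative context and votes for including it.
def rule_unexpected(msg, ctx):
    return msg["unexpected"]                      # surprises are usually worth reporting

def rule_matches_reader(msg, ctx):
    return ctx["reader"] in msg["relevant_to"]    # serves the communicative goal

RULES = [rule_unexpected, rule_matches_reader]

def determine_content(messages, ctx):
    selected = []
    for msg in messages:
        if len(selected) >= ctx["max_messages"]:  # respect the target size
            break
        if any(rule(msg, ctx) for rule in RULES):
            selected.append(msg["text"])
    return selected

messages = [
    {"text": "Heart rate shows bradycardias.", "unexpected": True,  "relevant_to": {"doctor", "parent"}},
    {"text": "Temperature is normal.",         "unexpected": False, "relevant_to": set()},
    {"text": "The baby is crying.",            "unexpected": False, "relevant_to": {"parent"}},
]
print(determine_content(messages, {"reader": "doctor", "max_messages": 2}))
```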