Automatically Evaluating Content Selection in Summarization Without Human Models

Penn collection
Departmental Papers (CIS)
Subject
Computer Sciences
Author
Louis, Annie
Nenkova, Ani
Abstract

We present a fully automatic method for content selection evaluation in summarization that does not require the creation of human model summaries. Our work capitalizes on the assumption that the distribution of words in the input and an informative summary of that input should be similar to each other. Results on a large-scale evaluation from the Text Analysis Conference show that input-summary comparisons are very effective for the evaluation of content selection. Our automatic methods rank participating systems similarly to manual model-based pyramid evaluation and to manual human judgments of responsiveness. The best feature, Jensen-Shannon divergence, leads to a correlation as high as 0.88 with manual pyramid and 0.73 with responsiveness evaluations.
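
As a rough illustration of the input-summary comparison described in the abstract (a minimal sketch under assumed preprocessing, not the authors' exact implementation), the following Python snippet scores a candidate summary by the Jensen-Shannon divergence between its unigram word distribution and that of the input. The whitespace tokenization, epsilon smoothing, and example texts are assumptions; lower divergence indicates word distributions that are closer, which the paper finds correlates with better content selection.

import math
from collections import Counter

def word_distribution(text, vocab):
    # Unigram probabilities over a shared vocabulary, with a tiny
    # epsilon added so every vocabulary word has nonzero mass
    # (this smoothing scheme is an assumption, not from the paper).
    counts = Counter(text.lower().split())
    eps = 1e-12
    total = sum(counts.values()) + eps * len(vocab)
    return {w: (counts[w] + eps) / total for w in vocab}

def js_divergence(p, q):
    # Jensen-Shannon divergence between distributions over the same vocab:
    # JSD(P, Q) = 0.5 * KL(P || M) + 0.5 * KL(Q || M), where M = (P + Q) / 2.
    def kl(a, b):
        return sum(a[w] * math.log2(a[w] / b[w]) for w in a)
    m = {w: 0.5 * (p[w] + q[w]) for w in p}
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical input document and candidate summary.
input_text = "the economy grew rapidly this quarter while unemployment fell sharply"
summary = "the economy grew and unemployment fell"

vocab = set(input_text.lower().split()) | set(summary.lower().split())
p = word_distribution(input_text, vocab)
q = word_distribution(summary, vocab)

# Lower divergence means the summary's word distribution is closer to the
# input's, i.e. better content selection under the paper's assumption.
print(f"Jensen-Shannon divergence: {js_divergence(p, q):.4f}")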

Date of presentation
2009-08-01
Conference name
Conference on Empirical Methods in Natural Language Processing (EMNLP 2009)
Conference dates
August 2009
Comments
Louis, A. & Nenkova, A. (2009). Automatically Evaluating Content Selection in Summarization Without Human Models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), August 2009. Available at: http://www.aclweb.org/anthology/D09-1032