Hartley et al. analyzed the Abstracts, Introductions, and Discussions of 80 journal articles in educational psychology, all drawn from the Journal of Educational Psychology, to evaluate their readability. Computer-based style programs (the Linguistic Inquiry and Word Count program and Microsoft's Office 97) were used to evaluate the overall readability of the text, as well as sentence lengths, difficult and unique words, articles, prepositions, and pronouns. They adopted the Flesch Reading Ease score (R. E. score), a measure of readability, or text difficulty (R. E. score 90-100: very easy, age 10; 60-69: standard, ages 13-14; 0-29: very difficult, graduate students). The score is calculated from the lengths of the sentences and words. However, it cannot capture readers' motivation or their appreciation of the genre, both of which also make academic text readable. Many studies have nevertheless shown that the R. E. score can be useful despite these limitations.
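The Flesch formula combines average sentence length with average syllables per word. A minimal sketch in Python of how such a score can be computed (the syllable counter below is a rough vowel-group heuristic introduced here for illustration; real readability tools typically use pronunciation dictionaries):

```python
import re

def count_syllables(word):
    # Rough heuristic: count groups of consecutive vowels,
    # dropping a common silent final "e". Not dictionary-accurate.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1 and not word.endswith(("le", "ee")):
        n -= 1
    return max(n, 1)

def flesch_reading_ease(text):
    # R.E. = 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Because long sentences and polysyllabic words both lower the score, dense academic prose such as an Abstract tends to land near the bottom of the scale.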
The results showed that the Abstracts (mean R. E. score 18.1) scored worst on most of these measures of readability, the Introductions (mean R. E. score 20.5) came next, and the Discussions (mean R. E. score 22.7) did best of all. However, although the mean scores differed between the sections, individual authors wrote in stylistically consistent ways across them. Thus, readability varied across the sections but was consistent within authors. They suggest that Abstracts are difficult to write because dense and complex material has to be fitted within a tight word limit, whereas the Discussion section only requires authors to comment on what they found and reported in the Results section. However, this research is limited by its data source, a single journal. Further research is needed on single- and multiple-authored articles in a variety of disciplines.