A revised Bloom's taxonomy evaluation of formal written language test items

Yulis Setyowati
Susanto Susanto
Ahmad Munir

Abstract

This paper aims to portray the appropriateness of test items in language tests according to Bloom's Taxonomy. Thirty written language tests created by EFL lecturers were analyzed using document analysis; the data were categorized and examined. At the remembering level, crucial recall questions were applied; understanding-level items required finding specific examples or data, identifying general concepts or ideas, and abstracting themes. Applying-level items asked students to complete particular projects or solve problems; analyzing-level items involved conducting a SWOT analysis; evaluating-level items required demonstrating a strategic plan; and creating-level items required producing new ideas, generalizing, and drawing conclusions. The findings showed that test items at the remembering level accounted for 66%, understanding 16%, applying 2%, analyzing 9%, evaluating 2%, and creating 5%. This reveals a disparity between the use of lower-order thinking skills (LOTS) and higher-order thinking skills (HOTS). Hence, Bloom's taxonomy was not well distributed across the language tests.


Keywords: test items, formal written language tests, Revised Bloom taxonomy

How to Cite
Setyowati, Y., Susanto, S., & Munir, A. (2022). A revised Bloom's taxonomy evaluation of formal written language test items. World Journal on Educational Technology: Current Issues, 14(5), 1317–1331. https://doi.org/10.18844/wjet.v14i5.7296