The Multiple Intelligences Interest Inventory (MI3): Studies of Reliability and Validity
Abstract
Preliminary analyses of the reliability and validity of scores on the norm-referenced, 80-item Multiple Intelligences Interest Inventory (MI3) were conducted. The MI3 was designed to identify the intellectual interests and preferences of individuals aged 12 years and older, in accord with Gardner's theory of multiple intelligences, and to help people identify and understand their relative, self-perceived areas of strength and challenge. Three independent samples of 2,866, 258, and 99 participants were used to examine score reliability and the internal (construct) and external (concurrent) facets of validity. Exploratory and confirmatory factor analyses indicated eight correlated factors underlying the 80-item test, as predicted by multiple intelligences theory, although the fit of the model to the data was marginal. Score reliability (α range of .80-.88) and concurrent validity (rs > .50) were more than adequate for screening-level purposes.
Keywords: multiple intelligences; factor analysis; concurrent validity; reliability
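As a point of reference for the α values reported in the abstract, the following is a minimal sketch, not the authors' analysis code, of how coefficient α might be estimated for each 10-item MI3 subscale. The DataFrame layout, column prefixes, subscale labels, and file name are illustrative assumptions rather than part of the published instrument.

```python
# Hypothetical sketch: coefficient alpha for each assumed 10-item MI3 subscale.
# Assumes item responses are in a pandas DataFrame whose columns are named with
# a subscale prefix (e.g., "ling_1" ... "ling_10"); these names are illustrative.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Coefficient alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Assumed prefixes for the eight intelligences (labels are placeholders).
subscales = ["ling", "logmath", "spatial", "music", "bodkin", "interp", "intrap", "natural"]

# Example usage with a hypothetical response file (80 item columns, one row per respondent):
# df = pd.read_csv("mi3_responses.csv")
# for scale in subscales:
#     cols = [c for c in df.columns if c.startswith(scale)]
#     print(scale, round(cronbach_alpha(df[cols]), 2))
```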