Developing Information Skills Test for Malaysian Youth Students Using Rasch Analysis

This study explored the psychometric properties of a locally developed information skills test for youth students in Malaysia using Rasch analysis. The test was a combination of 24 structured and multiple-choice items with a 4-point grading scale. The test was administered to 72 technical college students and 139 secondary school students. The data from the test were fitted to the Rasch partial credit model using the Winsteps program, and the unidimensionality, reliability and person-item distribution map of the test were examined. The analysis showed that all 24 items met the Rasch model expectations and thus have potential for assessing the information skills of youth students in Malaysia. The findings showed that Rasch analysis can help researchers refine a developed test in a systematic and informed manner.


Introduction
The Australian and New Zealand Institute for Information Literacy (Bundy, 2004), the Association of College and Research Libraries (2000), and the Society of College National & University Libraries (1999) described information skills as a set of abilities to identify the need for information, and to search, retrieve, assess, organize, analyze and synthesize information from various sources in order to develop new understanding, and to use that understanding to serve the information need. UNESCO (2006) also recognized information skills as an extension of reading, writing and arithmetic skills that enables communities and their members to function and progress. In the context of today's knowledge society, information skills are essential to 21st-century and lifelong learners (Bundy, 2004).
In Malaysia, the development of Smart Schools in 1999 was identified as a catalyst for the development of information skills among school students. Among others, the missions of the Smart Schools were to develop a national labour force that is literate in information and communication technology (Smart School Project Team, 1997). Such missions were accomplished by facilitating classroom learning with multiple tools and applications of information and communication technology. Working on a similar platform, Malaysian universities run various information skills programs to help students become information literate (Edzan & Mohd-Saad, 2005; Chan, 2003; Mohd-Saad & Awang, 2002). These programs were part of the Malaysian National Information Technology Agenda, established in 1996 to facilitate the development of knowledge workers by the year 2020 (Chan, 2003). Knowledge workers have the abilities to "acquire, apply, synthesize and create knowledge" (Economic Planning Unit, 2001) essential for the development and sustenance of a knowledge economy. Underpinned by the generation and utilization of knowledge, the knowledge economy requires a special kind of worker who is able to create, innovate, generate, and exploit new ideas through acquiring, applying, synthesizing, and creating knowledge, as well as to apply technology and exercise superior entrepreneurial skills (Economic Planning Unit, 2001, 2002, 2006). Recently, the development of information-literate graduates became a focus of higher learning institutions in Malaysia when the Malaysian Qualifications Framework (Malaysian Qualifications Agency, 2007) was introduced in late 2006. Among others, the framework identified information skills as a series of higher learning outcomes that students need to demonstrate, and later be measured upon, across their university studies. The Ministry believed that student demonstration of these skills would help local university graduates become competitive in the national and international job markets. Institutions of learning in Malaysia thus need to develop students' information skills at all levels of education. The development of reliable tools to measure students' information skills could be viewed as facilitating the implementation of the national educational framework and, ultimately, the development of knowledge workers in Malaysia.

Information Skills
Although information skills and information literacy are often discussed interchangeably, information literacy is more commonly used in the United States, whereas information skills is more commonly used in Great Britain, Australia, and New Zealand (Joint Information Systems Committee, 2002). Moreover, the Society of College National & University Libraries (1999) and the Chartered Institute of Library and Information Professionals (2004) suggested that information literacy is the goal of information-literate individuals, while information skills are the means to achieve that goal. For example, the Chartered Institute of Library and Information Professionals stated that information literacy is an understanding of "a need for information, the resources available, how to find information, the need to evaluate results, how to work with or exploit results, ethics and responsibility of use, how to communicate or share your findings, and how to manage your findings" (Chartered Institute of Library and Information Professionals, 2012, p. 1). In this sense, information literacy requires individuals to master several information-related skills, such as the abilities to identify the need for information and the resources available, and to find, evaluate, use or exploit, communicate and manage information in an ethical and responsible manner. Given the fine line between the usage of "information skills" and "information literacy," this study used the two terms interchangeably, as advocated by the Joint Information Systems Committee (2002).
"Information literacy" has been criticized because it misleadingly conveys the meaning of basic reading and writing literacy of printed materials (e.g., Harris & Millet, 2006; Marcum, 2002), and because it highlights the conventional technology or environment for accessing and retrieving information and its sources (e.g., Bundy, 2002; Fryer, 2005). Alternative terms such as "information fluency," "sociotechnical fluency," "digital literacy," and "e-literacy" were introduced in the literature to replace information literacy. For example, Harris and Millet (2006) advocated for information fluency because "fluency" fits the requirements of learning outcomes, objectives, and assessment in education. Marcum (2002), on the other hand, suggested that the term "information literacy" be replaced by "sociotechnical fluency," which conveys a concept of compounded skills covering "the visual, the interactive, and the cultural domains" of the knowledge construction process and better reflects the current social and psychological aspects of learning. Likewise, to reflect the growing usage of information and communication technology as a medium of teaching and learning, terms such as digital literacy (e.g., Bundy, 2002; Fryer, 2005) and e-literacy (e.g., Aberton, 2006; Badger & Roberts, 2005) were introduced into the literature to differentiate information access, use, and communication within digital/electronic/wireless versus conventional learning environments. In this regard, Fryer (2005) defined digital literacy as a set of "abilities to appropriately access, validate, synthesize, and utilize both analog and digital information sources to achieve a defined purpose," which "includes the abilities to communicate and collaborate effectively with information, transforming it into knowledge through a process of authentic and contextual utilization" (pp. 7-8). E-literacy, on the other hand, is considered to be "information technology literacy [that] underpins information literacy attainment" (Badger & Roberts, 2005, p. 28) and better suits "information literacy in an age of digital information" (Beeson, 2006, p. 210).
Despite the use of different terms for information literacy, the literature agrees that there are several desired outcomes for information-literate students in higher education. For example, the Australian and New Zealand Institute for Information Literacy (Bundy, 2004), the Association of College and Research Libraries (2000), and the Society of College National & University Libraries (1999) indicated that information-literate students must be able to acquire and demonstrate the abilities to define an information need; identify and select information sources; retrieve information and its sources; evaluate information and its sources; record, analyze and organize information and its sources; synthesize information to construct understanding; use the understanding to attain a specific goal; communicate and validate the understanding and goal; and understand the ethical, personal, and social issues of accessing and using information. Moreover, the available literature often associates information skills with information technology, library, and critical thinking skills. For example, Harris and Millet (2006) asserted that a combination of "technological literacy, information literacy, and critical thinking" (p. 533) could promote the lifelong learning that sustains a knowledge society. Likewise, Catts and Lau (2008) argued that a combination of information, media, basic oral communication, and reasoning literacy could lead to the development and running of such a society. On the other hand, Bundy (2004) suggested that information skills are made up of interrelated skills identified as critical thinking, computer, library, and learning skills, a position also posited by Humes (1999) and Marcum (2002). Similarly, Bruce's (1997) conception of information literacy was underpinned by the use of information technology, library, and critical thinking skills.

Problem Statement
Although a few studies in Malaysia have developed instruments to measure the information skills of higher education students (e.g., Abang-Ismail & Pui, 2006), most of them measured information skills using students' perceived performance. Although these studies were useful in generally describing the level of students' information skills in Malaysia, they were insufficient to inform students' actual performance of information skills. This is particularly true given that it is well known in education that students' perceived performance is not necessarily a good indicator of their measured performance (e.g., Grant, Malloy, & Murphy, 2009; Sarrico, 2010). In this regard, previous studies indicated some effort to measure students' actual performance. Yet most of these studies, except for a few (e.g., Cameron, Wise, & Lottridge, 2007; Mercy, Newby, & Peng, 2011), did not examine the psychometric properties of the developed tests. Without this psychometric information, no evidence about the validity and reliability of the developed tests is available to support further refinement of the tests for future studies. Moreover, existing studies focus on measuring the information skills of higher education students, leaving a gap in understanding the information skills of school and college students who will later enrol in higher education institutions. Identifying and understanding the information skills of school and college students prior to their entrance to university could help schools, colleges and universities work in partnership to prepare the students for their university and lifelong learning. Therefore, this study aimed to develop an information skills assessment test (ISAT) for secondary school and college students in Malaysia and to examine the psychometric properties of the developed test.

Method
The study involved four phases of test development. During the first phase, the study identified six underlying constructs of information skills from the information literacy standards for higher education by Bundy (2004), the Association of College and Research Libraries (2000), and the Society of College National & University Libraries (1999). The constructs comprised the abilities to identify an information need, and to search, evaluate, organise, and ethically use information and its sources in order to develop, use and communicate personal understanding. Similar constructs also emerged in previous qualitative studies that explored information skills in Malaysian higher education (i.e., Karim et al., 2004; Karim et al., 2010; Karim et al., 2011). Likewise, except for the ethical use of information and its sources, previous quantitative studies by Abang-Ismail and Pui (2006), Abdullah et al. (2006) and Edzan (2007) also used similar components to assess the information skills of Malaysian university students. In the second phase, the study developed 30 structured and multiple-choice items to measure the six identified constructs. The study also employed three teachers, two librarians and four postgraduate students in a Malaysian university to review the test items. Six items were excluded from the test because the reviewers found them redundant or ambiguous. With 24 revised items remaining, the study prepared a rubric to facilitate the scoring of item responses in the third phase of the study. The items were scored on an interval scale from 0 (wrong answer) to 4 (correct answer); this scaling was employed because the study was interested in measuring various levels of students' information skills. During the fourth phase, the second version of the test and its response scaling were reviewed by seven teachers, librarians, and students, some of whom were present during the first review. The second review led to the revision of the numbering and wording of existing items to indicate further the stages of information skills identified in the information literacy standards for higher education. The final constructs and items for the test are shown in Table 1.
The data from the test were fitted to the Rasch Measurement Model (Rasch, 1980). The model enables the tabulation of the items (I) of a given task and the persons (P) on a distribution map (PIDM) that uses a common scale known as the logit. In this sense, Rasch analysis allows a more accurate estimate of a person's ability on a linear scale of measurement and thus provides exploratory depth in understanding the ability of each person in relation to the difficulty of every item. Similar to the parameters of Item Response Theory (IRT), the Rasch Measurement Model assists the instrument validation process by ensuring that: (1) item statistics are independent of sample size, (2) respondents' scores are independent of item difficulty, (3) item analysis does not require strictly parallel tests to interpret item reliability, (4) item analysis locates items according to respondents' ability, and (5) item analysis maps test items and respondents' ability on a linear scale of measurement (Schumacker, 2005; Hambleton & Swaminathan, 1985). In this study, the psychometric properties of the developed test that were examined were its unidimensionality, reliability, and item-person distribution.
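The partial credit model used for the 0-4 scored items can be illustrated with a minimal sketch. The step difficulties below are hypothetical, not the actual ISAT calibrations; the sketch only shows how the model turns a person's ability and an item's step structure into category probabilities on the logit scale:

```python
import math

def pcm_probabilities(theta, thresholds):
    """Probability of each score category (0..m) under the Rasch
    partial credit model for a person of ability `theta` (logits)
    and an item with step difficulties `thresholds` (logits)."""
    # Cumulative sums of (theta - tau_j); the empty sum gives category 0.
    psi = [0.0]
    for tau in thresholds:
        psi.append(psi[-1] + (theta - tau))
    exps = [math.exp(p) for p in psi]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical step difficulties for one 0-4 item (illustration only).
probs = pcm_probabilities(theta=0.5, thresholds=[-1.0, -0.2, 0.4, 1.2])
expected_score = sum(k * p for k, p in enumerate(probs))
```

Because the category probabilities depend only on the difference between ability and step difficulty, persons and items land on the same linear logit scale, which is what makes the person-item distribution map possible.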

Results and Discussion
This section discusses the profile of the respondents, and the unidimensionality, reliability, and item-person distribution of the developed test.

Profile of the Respondents
The respondents involved in the study were 211 students, comprising 72 technical college students and 139 upper secondary students. As illustrated in Table 2, most of the respondents were secondary school students (65.9%) and male (57.3%). Additionally, a majority of the respondents had a cumulative grade point average of 3.00 to 3.59 (37%) and were aged 15 to 18 years old (68.2%).

The Unidimensionality of the Test
A unidimensional model is one where there is "ONE latent variable of interest and the level of this latent variable is the focus of measurement" (Wu & Adams, 2007). Without the implicit assumption of unidimensionality, there is no meaning we could attach to aggregated test scores, rendering them uninterpretable. In the Rasch Model, unidimensional measurement refers to "the non-random variance found in the data that could be explained by a single dimension of difficulty and ability" (Sick, 2010). Several tools are available for assessing psychometric unidimensionality in Rasch analysis, such as the item point-measure correlation, the infit and outfit mean-square fit statistics, and the principal component analysis of residuals. Acting as an early check of construct validity, item polarity was first examined through the point-measure correlation. As illustrated in Figure 1, all items have a positive point-measure correlation value, suggesting that there is no difference in the direction of the items and the constructs that they are measuring (Bond & Fox, 2007). The study further verified the items in the OUTFIT column for the mean-square value, where MnSq = y, 0.5 < y < 1.5, is identified as productive measurement (Linacre & Wright, 1994). Figure 1 indicates that all items have a MnSq value within this parameter. A further check was conducted on the Z-Std value, where Z-Std = z, -2 < z < +2, is considered the acceptable range (Bond & Fox, 2007). Figure 1 shows that items S10.EVAL and S15.SOUR have Z-Std values of 3.2 and 4.8 respectively, which are beyond the acceptable range. This situation suggests that the mean-square values of the items tell us that "these data fit the Rasch model usefully," while the Z-Std values tell us "but not exactly" (Linacre, 2012); that is, these items are useful to the measurement but require further refinement. Looking back at the developed test, item S10 assesses students' knowledge of evaluating information sources, while S15 assesses students' knowledge of types of information sources. Both items also comprise multiple sub-items that may be revised in the future into a set of individual items in the test.
The study further employed the principal components analysis of Rasch residuals, which aims "to extract the common factor that explains the most residual variance under the hypothesis that there is such a factor" (Linacre, 1998). As shown in Figure 2, the test has 24 active items with a poor variance in data explained by measures at 43.1% (Fischer, 2007). The analysis also indicated that the item measures (31.9%) are more dispersed than the person measures (11.1%). Since this is an exploratory study that expected to discover multiple levels of students' information skills, it was not convincing to see so little dispersion of student abilities (11.1%). Accordingly, the study expects that re-administering the test to a larger sample drawn from multiple strata of abilities would increase the dispersion and thus the variance explained by the measures.
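The fit screening described above reduces to two cut-offs: a productive outfit mean-square between 0.5 and 1.5 (Linacre & Wright, 1994) and a standardized fit within ±2 (Bond & Fox, 2007). The sketch below applies these rules; the MnSq values are hypothetical stand-ins (the paper reports only that all MnSq values were in range), while the Z-Std values for S10.EVAL and S15.SOUR are those reported:

```python
# Cut-offs used in the study: productive measurement when
# 0.5 < MnSq < 1.5 (Linacre & Wright, 1994) and -2 < Z-Std < +2
# (Bond & Fox, 2007). MnSq values are hypothetical; the Z-Std values
# for S10.EVAL and S15.SOUR are those reported in the paper.
items = {
    "S10.EVAL": {"mnsq": 1.3, "zstd": 3.2},
    "S15.SOUR": {"mnsq": 1.4, "zstd": 4.8},
    "S1.G.EXP": {"mnsq": 0.9, "zstd": -0.5},
}

def fit_verdict(mnsq, zstd):
    """Classify an item by the study's outfit criteria."""
    if not 0.5 < mnsq < 1.5:
        return "misfitting"
    if not -2 < zstd < 2:
        # Fits "usefully, but not exactly" (Linacre, 2012).
        return "useful, refine"
    return "productive"

verdicts = {name: fit_verdict(**stats) for name, stats in items.items()}
```

Under these rules, S10.EVAL and S15.SOUR fall into the "useful, refine" band, matching the study's decision to keep but revise those items.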
Figure 2 also indicates that the eigenvalue of the unexplained variance in the first contrast is 3.4 (8.1%), suggesting a possible secondary dimension with the strength of about three of the 24 items. Although an instrument with 5-10% of unexplained variance in contrasts 1 to 5 of the PCA of residuals is rated as a good instrument (Fischer, 2007), a further examination was conducted on the standardized residual loadings for items (sorted by loading) in the first contrast of the principal component analysis of residuals. Among the high-loading items was item S8.CONFER: "Who are the persons that you could discuss your information searching with? Why do you choose to discuss the information searching with them?"
Analyzing the three high-loading items together, the study found that all three shared a similar characteristic: they require students to give their own justification of multiple ways of obtaining and using information sources to complete an academic report. In this light, the study established that there is no secondary dimension in the test. Instead, the unexplained variance merely represents a content strand found within information skills, i.e., obtaining and using multiple information sources for a specific purpose.
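The residual analysis above follows the standard Winsteps procedure: remove the Rasch dimension, then inspect the principal components of the standardized residuals for a sizeable first contrast. A minimal sketch with simulated dichotomous data (the real analysis used the 24 polytomous ISAT items in Winsteps; persons, abilities and difficulties below are simulated for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate unidimensional dichotomous responses: 200 persons, 24 items.
theta = rng.normal(0.0, 1.0, size=200)        # person abilities (logits)
delta = rng.normal(0.0, 1.0, size=24)         # item difficulties (logits)
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - delta[None, :])))
x = (rng.random(p.shape) < p).astype(float)   # observed 0/1 responses

# Standardized residuals once the Rasch expectation is removed.
z = (x - p) / np.sqrt(p * (1.0 - p))

# Eigenvalues of the residual correlation matrix; the largest is the
# "first contrast" that Winsteps reports (3.4 for ISAT). For truly
# unidimensional data it stays near the noise level of about two items.
eigvals = np.linalg.eigvalsh(np.corrcoef(z, rowvar=False))
first_contrast = float(eigvals[-1])
```

Comparing the observed first contrast against the level produced by unidimensional noise is what justifies reading ISAT's 3.4 eigenvalue as a content strand rather than a distinct second dimension.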

Reliability of the Test
Figure 3 shows that the reliability of the item difficulty estimates is high (.99). The item separation index of 8.41 indicates that the items can be separated into about eight strata of difficulty, ranging from very easy to very difficult. The high item reliability of .99 also indicates that if the items were given to other comparable cohorts of students, there is a high probability that the test would reproduce this order of item hierarchy along the measured variable. With regard to the person measures, the reliability coefficient of .88 is considerably good on a scale from 0 to 1 (Fischer, 2007), while the person separation index of 2.66 indicates that the students can be separated into roughly three ability strata: low-skilled, medium-skilled and high-skilled. The study expects that the separation index would be larger if the test were given to a larger sample.
Responses from the respondents were run through Winsteps to obtain logit values for item difficulty and person ability on the same continuous measurement unit, allowing them to be compared against each other. Figure 4 shows the Person-Item Distribution Map (PIDM), which plots person and item measures against the same logit scale for a quick and clear view of the relationship between them. Figure 4 shows that the person mean (-.18) is located just below the item mean, which is constrained to 0.00, indicating that on average the students' ability sits just below the item difficulty. This result suggests that, on average, the developed test items are slightly difficult in relation to the information skills of this particular cohort of students. Comparing person ability and item difficulty on the same interval scale, Figure 4 also locates seven students at the lowest end of the person scale, with CGPAs spanning the 1.99-2.59, 2.60-2.99 and higher ranges. The finding is interesting because these students, except for Student 164, scored medium to high CGPAs and yet performed badly in the information skills test. One possible explanation is that students apply little or no information skills within their classroom learning and assessment. Furthermore, the finding suggests that the seven students require special sessions to expose them to information skills prior to training them to apply information skills across their classroom learning.
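The reported reliability and separation values are linked by a standard Rasch relation, R = G²/(1 + G²), and the number of statistically distinct levels follows the strata formula H = (4G + 1)/3 of Wright and Masters. A short check confirms that the figures above are mutually consistent:

```python
import math

def reliability_from_separation(g):
    """Rasch reliability R implied by a separation index G."""
    return g * g / (1.0 + g * g)

def separation_from_reliability(r):
    """Inverse relation: G = sqrt(R / (1 - R))."""
    return math.sqrt(r / (1.0 - r))

def strata(g):
    """Statistically distinct levels, H = (4G + 1) / 3 (Wright & Masters)."""
    return (4.0 * g + 1.0) / 3.0

item_rel = reliability_from_separation(8.41)    # about 0.986, reported as .99
person_rel = reliability_from_separation(2.66)  # about 0.876, reported as .88
person_strata = strata(2.66)                    # about 3.9 distinct levels
```

The item separation of 8.41 indeed implies a reliability that rounds to the reported .99, and the person separation of 2.66 implies the reported .88; the strata formula puts the number of distinguishable ability levels between three and four, consistent with the three-band reading above.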

Limitation and Implication
The purpose of this study was to develop a local information skills test, known as ISAT, for school and college students in Malaysia, and to investigate the unidimensionality, reliability and person-item distribution of the test. The findings indicated that all 24 active items in ISAT displayed an acceptable fit to the Rasch Measurement Model and thus may be useful for assessing the information skills performance of youth students in secondary schools and colleges in Malaysia. However, the findings suggested that the construct validity of ISAT could be increased by revising its multiple sub-items into individual test items. The findings also showed that ISAT could be a better-targeted instrument for measuring the information skills of Malaysian youth students if the test introduced very difficult and very easy items that calibrate well with the highest and lowest levels of student ability on the person scale. In addition, ISAT has a low value of variance in the data explained by measures, indicating a need to re-administer the test to a larger sample with multiple strata of abilities and backgrounds, such as students from private and mainstream secondary schools and non-technical colleges. By doing so, it is expected that the dispersion of the person measures, and thus the variance explained by the measures of ISAT, would increase significantly.

Conclusion
While many tests have been developed to measure information skills, they have mainly focused on university students and provided insufficient information on the psychometric properties of the developed tests. This study explored the validity and reliability of a locally developed test that measures the information skills of secondary school and college students in Malaysia. The study employed the Rasch Measurement Model to explore the fit statistics and unidimensionality of the information skills test items. All test items displayed an acceptable fit to the Rasch Measurement Model and assessed a unidimensional construct, as indicated by the PCA of the Rasch residuals.
The findings suggest that the developed test has the potential to become an assessment tool for teachers and librarians in Malaysian schools and colleges to identify students' information skills levels and appropriate strategies for students' acquisition of information skills.

Figure 1. Item polarity of the developed test

Figure 2. The principal components analysis of Rasch residuals of the developed test

Figure 3. Person and item reliability coefficients of the developed test

The distribution map shows that Student 54 has the highest ability estimate of +1.88 logits, located at the highest end of the person scale, while Student 188 has the lowest ability estimate of -5.46 logits, located at the lowest end. The map also shows that the item measures range from a maximum of +1.30 logits (item S10.EVAL) to a minimum of -1.46 logits (item S1 G.EXP), indicating that S10.EVAL and S1 G.EXP are respectively the most difficult and easiest items in the developed test.

Figure 4. Person-item distribution map of the developed test

Table 1. Constructs and items of the developed test

As part of a preliminary study, ISAT was administered to 145 upper secondary school students and 78 college students. The raw scores were recorded and analyzed using the SPSS program for Windows version 11, and later fitted to the partial credit model of the Rasch Measurement Model using the Winsteps program version 3.69.1.11. The Rasch Measurement Model is a measurement model that takes into account the ability levels of each person who responded to the test as well as the difficulty levels of each test item.