David Curtis

Recent Submissions

  • Item
    The "Gap Year" in Australia: Incidence, Participant Characteristics and Outcomes
    (John Wiley & Sons, Ltd, 2014-04) Curtis, David D
    I report on the incidence of gap-taking (a year between secondary school and university) and find that it has increased from about 10 per cent to almost 25 per cent of recent school-leaver university entrants. Gap-takers have lower school achievement scores than direct entrants. Non-metropolitan students are much more likely to take a gap year. I investigate evidence that gap-takers work in order to access the Youth Allowance benefit. Finally, I compare the course and career progression of gap-takers and non-gappers.
  • Item
    Research and national debate on Australian schooling
    (Shannon Research Press, 2006-11) Keeves, John Philip; Curtis, David D
    This paper is a response to the paper prepared by Masters that is titled 'The case for an Australian Certificate of Education'. It argues that a national debate is needed urgently on the many issues that have arisen in Australian education. These issues include not only the curriculum provided for students at the final stages of secondary schooling, and the certification of attainment of educational outcomes on completion of 12 years of schooling, but also the curriculum of schools across Australia, particularly at the lower and middle secondary school levels. In addition, there are related issues associated with participation in higher education and the completion of a first degree at an Australian university. All too often, decisions are made at all levels of education on ideological grounds and without consideration of the body of research findings that are available to guide the making of decisions and the monitoring of development and change. This paper draws on readily available research to show the similarities and differences between the state education systems to argue a case for informed debate that draws on the large body of evidence that is available.
  • Item
    Person misfit in attitude surveys: influences, impacts and implications.
    (Shannon Research Press, 2004-07) Curtis, David D
    This study of person fit in attitude surveys was undertaken in order to investigate the influence of the inclusion of misfitting persons on item parameter estimates in analyses using the Partial Credit extension of the Rasch measurement model. It was hypothesised that the inclusion of misfitting persons in data sets used for the calibration of attitude survey instruments might compromise the measurement properties of those instruments. Using both actual and simulated data sets, the inclusion of misfitting cases was found to reduce item variance. Several characteristics of both item and person samples were found to influence the proportion of cases identified as misfitting. These characteristics must be considered before removing cases that, according to customary practice, appear to misfit. The residual-based misfit indicators that are commonly reported in Rasch analyses, the weighted and unweighted mean squares, appear to have neither the generality across instruments nor the precision required to support clear decisions on the retention or elimination of cases from samples, and there is a need to seek better misfit indicators. [Author abstract]
  • Item
    Computer adventure games as problem-solving environments
    (Shannon Research Press, 2002-11) Curtis, David D; Lawson, Mike Joseph
    Claims that computer-based adventure games are productive environments for the development of general problem-solving ability were tested in a study of 40 students' interactions with a novel computer-based adventure game. Two sets of factors that are thought to influence problem-solving performance were identified in the literature – domain-specific knowledge (schema) and general problem-solving strategies. Measures of both domain-specific knowledge and general strategy use were developed and applied in the study. A cognitive model to explain performance is developed in which there are complex relationships among key concepts. General strategies were found to have important influences on problem-solving performance, but schema was negatively related to performance. The implications of these findings for both classroom practice and future research designs are discussed. [Author abstract]
  • Item
    Misfits: people and their problems. What might it all mean?
    (Shannon Research Press, 2001-11) Curtis, David D
    In the analysis of data, which arise from the administration of multiple-choice tests or survey instruments and which are assumed to conform to a measurement model such as Rasch, it is normal practice to check item fit statistics in order to ensure that the items used in the instrument cohere to form a unidimensional trait measure. However, checking whether individuals also fit the measurement model appears to be less common. It is shown that poor person-fit compromises item parameter estimates, and so it is argued that person-fit should be checked routinely in the calibration of instruments and in the scoring of individuals. Unfortunately, the meanings that can be ascribed to person-fit statistics for attitude instruments are not clear. A proposal for seeking the required clarity is developed. [Author abstract]
  • Item
    The Course Experience Questionnaire as an Institutional Performance Indicator
    (Shannon Research Press, 2000-07) Curtis, David D; Keeves, John Philip
    Data from the 1996 Course Experience Questionnaire (CEQ) were analysed using the Rasch measurement model. This analysis indicates that 17 of the 25 CEQ items fit a unitary scale that measures course quality as perceived by graduates. Graduates are located on the interval measurement scale produced in the Rasch analysis. The interval nature of the scale renders the graduates' scores amenable to analyses that cannot wisely be applied to ordinal raw CEQ scores. Analysis of variance indicates that variations in graduates' responses are attributable to field of study and institutional factors. In order to compare universities, corrections are made for the course mix of each institution to produce expected institutional scores. These are compared with observed institutional scores to determine those universities that have performed above, at, or below expectation. (Individual institutions are not identified in this analysis.) Important issues relating to the educational and statistical significance of the findings have emerged. The data collected through the CEQ do not represent a simple random sample of all graduates. Instead, the data model is a hierarchical one, with individual graduates nested within courses, which are nested within institutions. This requires analysis using multilevel analytical tools. Conventional analyses substantially underestimate the standard errors of aggregated measures (such as institutional means) and therefore report institutional differences as significant when they are not. The implications of the measurement and analytical problems for policy decisions over the distribution of funding among institutions and among courses within institutions are discussed.
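As background to the weighted and unweighted mean squares mentioned in the person-fit abstracts above, the sketch below shows how these statistics are conventionally computed for a dichotomous Rasch model. The papers use the Partial Credit extension, so this simpler dichotomous version is an illustrative assumption, not the authors' exact procedure, and all numbers in it are made up.

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct (endorsed) response under the dichotomous Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def person_fit(responses, theta, item_difficulties):
    """Return (outfit, infit) mean-square statistics for one person.

    responses         -- list of 0/1 scored responses
    theta             -- the person's location (ability/attitude) estimate
    item_difficulties -- Rasch difficulty parameter for each item
    """
    sq_resid = []   # squared raw residuals (x - p)^2
    variances = []  # model variances p(1 - p)
    for x, b in zip(responses, item_difficulties):
        p = rasch_prob(theta, b)
        sq_resid.append((x - p) ** 2)
        variances.append(p * (1.0 - p))
    # Outfit: unweighted mean of squared standardized residuals
    outfit = sum(r / v for r, v in zip(sq_resid, variances)) / len(sq_resid)
    # Infit: information-weighted mean square
    infit = sum(sq_resid) / sum(variances)
    return outfit, infit

# Illustrative values only: a person at theta = 0 answering three items
outfit, infit = person_fit([1, 1, 0], 0.0, [-1.0, 0.0, 1.0])
```

Both statistics have an expected value near 1 for well-fitting response patterns; values well above 1 flag erratic (misfitting) persons, which is the pattern the calibration studies above screen for.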
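The standard-error underestimation described in the final abstract can be illustrated with the Kish design effect, a common rule-of-thumb correction for clustered samples. This is an assumed illustration of the general point (graduates clustered within courses and institutions), not the specific multilevel method the paper applies, and the numbers are hypothetical.

```python
import math

def design_effect(cluster_size, icc):
    """Kish design effect: the factor by which sampling variance is inflated
    when observations are clustered, given an average cluster size and an
    intraclass correlation (icc)."""
    return 1.0 + (cluster_size - 1) * icc

def corrected_se(naive_se, cluster_size, icc):
    """Inflate a naive simple-random-sample standard error to account for clustering."""
    return naive_se * math.sqrt(design_effect(cluster_size, icc))

# Hypothetical example: 25 graduates per course, modest within-course correlation
deff = design_effect(25, 0.1)        # variance inflated 3.4-fold
se = corrected_se(0.05, 25, 0.1)     # naive SE of 0.05 nearly doubles
```

Even a modest intraclass correlation inflates the true standard error substantially, which is why conventional analyses that ignore the hierarchy can report institutional differences as significant when they are not.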