Does IDELA have standard benchmarks?

  • #4780

    Frannie Noble
    Keymaster

    (This question was asked by Jana Torrico with Food for the Hungry and is reprinted here with permission.)

    In an IDELA report STC published (Beyond Access), one of the articles discussed and visualized the data using three categories: Mastering, defined as scoring 75% or higher; Emerging, defined as scoring 25-74%; and Struggling, defined as scoring 24% or below.
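
    For concreteness, the cutoffs above amount to a simple classification rule. Here is a minimal sketch in Python; the function name and the assumption of a 0-100 percentage score are illustrative, not taken from the report:

    # Classify a score (percent correct, 0-100) into the three bands
    # described above: Mastering >= 75, Emerging 25-74, Struggling <= 24.
    def idela_category(score_pct):
        if score_pct >= 75:
            return "Mastering"
        elif score_pct >= 25:
            return "Emerging"
        else:
            return "Struggling"

    print(idela_category(68))  # Emerging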

    In discussion with our M&E team, we are considering moving to this structure for future assessments. However, I first wanted to get your input as to whether STC still uses these categories, or whether another type of classification is now being used as the standard.

  • #4781

    Frannie Noble
    Keymaster

    (Response from Lauren Pisani with Save the Children. Reposted with permission.)

    Nice to hear from you! I do think that the benchmarks we set out in that report are one helpful way to consider children’s development over time. Children in the ‘mastery’ group answer at least 3 out of 4 questions on the assessment correctly, which suggests mastery of the content, whereas those in the ‘struggling’ group answer fewer than 1 in 4 questions correctly, suggesting that they had difficulty meaningfully engaging with the content on the assessment.

    The advantages of this kind of indicator are that it’s easily interpretable by many audiences and that you can easily track the proportion of children meeting one or both benchmarks to see change over time for a particular area or group of children. The disadvantage is that the benchmarks are most relevant to 6-year-olds, or those who are on the cusp of enrollment in primary school. So if you want to incorporate evidence from younger children, it would be useful to modify these benchmarks to be more age appropriate. Also, these benchmarks don’t take local curricula or standards into account, so consider how you will share this information with local partners/government. (The alternative would be to create benchmarks that map to local curricula/standards, but then the disadvantage is that they’re less comparable across your sites.)
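
    As an illustration of tracking the proportion of children meeting each benchmark over time, here is a rough pandas sketch; the data and column names are entirely hypothetical:

    import pandas as pd

    # One row per child per assessment round; scores are percent correct.
    df = pd.DataFrame({
        "round": ["baseline"] * 4 + ["endline"] * 4,
        "score": [18, 42, 60, 81, 30, 55, 77, 90],
    })

    df["mastering"] = df["score"] >= 75           # met the 75% benchmark
    df["emerging_or_better"] = df["score"] >= 25  # met at least the 25% benchmark

    # Proportion of children meeting each benchmark, by round.
    print(df.groupby("round")[["mastering", "emerging_or_better"]].mean())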

    #4785

    Jonathan Seiden
    Keymaster

    This is a really important question that, while I was still at SC, we spent a lot of time thinking about, discussing, and, at least for me, worrying about.

    As Frannie and Lauren mentioned, it’s possible to set benchmarks at the 25% and 75% levels to create three broad categories that generally show that children are able to do almost none, some, or almost all of the IDELA activities correctly. However, this comes with a couple of limitations. First, we should be *very* careful comparing domains against each other: 25% on Motor does not indicate the same level of development as 25% on Emergent Literacy. Second, we need to be very cautious about interpreting the results because they are not age-adjusted: of course younger children are going to be considered “struggling” more often than older kids, even when they may be developmentally very much on track.
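
    One way to keep that age caveat visible in reporting is to tabulate the categories within age groups rather than pooled across ages; a sketch with invented numbers, using the same 25%/75% cutoffs:

    import pandas as pd

    df = pd.DataFrame({
        "age":   [4, 4, 5, 5, 6, 6],
        "score": [20, 35, 40, 60, 70, 85],
    })

    # Apply the unadjusted cutoffs, but report the result per age group.
    df["category"] = pd.cut(df["score"], bins=[-1, 24, 74, 100],
                            labels=["Struggling", "Emerging", "Mastering"])
    print(pd.crosstab(df["age"], df["category"]))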

    Also, as Lauren alludes to, the ideal benchmarking approach is (unfortunately) a much more involved process, but also one that yields much more informative and culturally and policy-relevant information. That process, simply stated, involves convening a panel of experts in a given context, reviewing the individual activities that children are asked to do on the assessment, and coming to a consensus on what children would be expected to be able to do at various ages. After deciding on expectations for the individual activities, these can be aggregated into benchmark scores for each domain and age group.

    The biggest advantage of this approach is that it generates conceptually comparable benchmarks for different domains and ages of children: you might find that a score of 50% on Motor would generate worry for a 6-year-old, but that the same score on the Emergent Literacy domain (which is substantially “harder”) would be considered on track.

    This non-statistical benchmarking approach is generally known as “policy linking,” and there are a variety of methods that can be used. When done in a systematic way across contexts, it can yield much more nuanced and informative benchmarks.
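
    To make the contrast with the single 25%/75% scheme concrete, here is a sketch of what panel-set, domain- and age-specific cutoffs might look like; every threshold below is invented purely for illustration:

    # Hypothetical cutoffs (percent correct) that a policy-linking panel might set.
    CUTOFFS = {
        ("Motor", 5): 55, ("Motor", 6): 70,
        ("Emergent Literacy", 5): 30, ("Emergent Literacy", 6): 45,
    }

    def on_track(domain, age, score_pct):
        # A child is "on track" if the score meets the cutoff for that
        # specific domain and age, rather than a single cross-domain threshold.
        return score_pct >= CUTOFFS[(domain, age)]

    print(on_track("Motor", 6, 50))              # False: below the 70% cutoff
    print(on_track("Emergent Literacy", 6, 50))  # True: above the 45% cutoff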
