Discussion Search Results




Search results in replies:

#3854 Reply

    Fabiola Lara
    Keymaster

The master IDELA training, also called a training of trainers (ToT), typically lasts 4-5 days, as outlined in the suggested agenda of the master English materials. If additional trainings are anticipated, for example to train enumerators, they should follow the same format as the ToT to ensure fidelity of implementation.

    #3906 Reply

    Jonathan Seiden
    Keymaster

    Hi Sarah,

    Great questions! Let me address them one by one:

    For the household measure, is there a composite measure/index of responses that Save has found corresponds to a household environment that is conducive for child development?

We don’t have a standardized way of measuring the overall household environment, but we often create individual indices for reading materials, toys, and home learning activities. We usually create indices for the “total number of types of X” reported by the caregiver. We can then generate variables for the total number of types of toys and reading materials (and sometimes combine these indices to create an overall “Home Learning Environment” index). For the home learning activities, we usually create a composite for “any” caregiver, and then also analyze the mother- and father-reported activities separately.
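As a rough illustration of these count-based indices (using hypothetical column names rather than the actual IDELA-HE variable names, and assuming 0/1 caregiver responses), the construction might look like this:

```python
import pandas as pd

df = pd.read_csv("idela_home_environment.csv")  # hypothetical file name

# Total number of *types* of toys and reading materials reported (0/1 items)
toy_cols = [c for c in df.columns if c.startswith("toy_")]
book_cols = [c for c in df.columns if c.startswith("book_")]
df["toy_index"] = df[toy_cols].sum(axis=1)
df["reading_index"] = df[book_cols].sum(axis=1)

# Combined "Home Learning Environment" index
df["hle_index"] = df["toy_index"] + df["reading_index"]

# Home learning activities: "any caregiver" composite from mother/father reports
activity_stems = ["read", "tell_story", "sing", "play", "count"]
for stem in activity_stems:
    any_caregiver = (df[f"activity_{stem}_mother"] == 1) | (df[f"activity_{stem}_father"] == 1)
    df[f"activity_{stem}_any"] = any_caregiver.astype(int)

df["activities_any_index"] = df[[f"activity_{s}_any" for s in activity_stems]].sum(axis=1)
```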

    Same question for socioeconomic status. Has Save the Children used IDELA-HE to create an index of SES?

    We take a similar approach to this issue as well. We can create a measure of “total number of types of household possessions” as a proxy for SES.

Obviously, these approaches are a bit reductionist because we are assuming that all types of possessions, toys, reading materials, etc. contribute equally. You may want to use more sophisticated measures, such as factor analysis, to address these limitations. The trade-off is then how to communicate your results. It’s relatively easy to understand that, on average, one additional reading material was associated with an X percentage-point change in IDELA score; it’s more complicated when you condense variables with other methods. One option would be to use those other methods, but then rely on quantiles to communicate your results.
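If you do go the more sophisticated route, a minimal sketch of a first-principal-component wealth index over possession indicators, reported in quintiles, might look like the following (again with hypothetical column names; PCA is used here as a simple stand-in for factor analysis):

```python
import pandas as pd
from sklearn.decomposition import PCA

df = pd.read_csv("idela_home_environment.csv")  # hypothetical file name
possession_cols = [c for c in df.columns if c.startswith("possess_")]

# Simple count of types of household possessions, as described above
df["possessions_index"] = df[possession_cols].sum(axis=1)

# First principal component as a weighted wealth index
pca = PCA(n_components=1)
df["wealth_score"] = pca.fit_transform(df[possession_cols].fillna(0))[:, 0]

# Quintiles make the condensed score easier to communicate
df["wealth_quintile"] = pd.qcut(df["wealth_score"], q=5,
                                labels=["Q1", "Q2", "Q3", "Q4", "Q5"])
```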

    • This reply was modified 5 years, 1 month ago by Jonathan Seiden.
    #4028 Reply

    Sherrilee Le Mottee
    Guest

Hi, I am working on a study on early childhood development and am really interested in the use of the draw-a-person task in IDELA. Can you give me more information on how you are scoring it, please? Also your socio-emotional domain.
Thanks so much.
    Sherri Le Mottee

    #4068 Reply

    Frannie
    Guest

    Hi Sherrilee, Thanks for joining the conversation.

The directions used to score the “drawing a person” task are fairly detailed. We give young learners points for the level of detail they include when drawing the person. More detail can be found in the IDELA administration guide, which is available to IDELA partners.

    Becoming an IDELA partner is free, and requires the completion of our MOU. You can learn more about the MOU, or complete it, here.

For the social-emotional tasks, we ask young children about emotional awareness, empathy, self-awareness, conflict resolution, and their relationships with peers. This is done through discussion scenarios and pictures. For more detail, feel free to complete the MOU and gain access to the full IDELA tool, or email [email protected]. We look forward to hearing from you!

    #4093 Reply

    Fabiola Lara
    Keymaster

    Hi Adiel, great question. If you require TA support for the training of enumerators, please feel free to reach out to us with the timeline for your project and we would be happy to share costs related to providing that support.

In terms of the training, it can be carried out in two ways: a training of trainers (who will then cascade the training to the actual data collectors), or a training of enumerators directly.

    Depending on the size of either training, one or two people can facilitate the training – ideally two people to ensure adequate support and mentoring to enumerators. Essentially, anyone collecting data (such as an enumerator/data collector) needs to receive the full training before collecting any data in the field to ensure reliable data.

    #4174 Reply

    Fabiola Lara
    Keymaster

Once the MoU located on this site is signed, users have access to the tool itself (the direct child assessment), as well as other supplementary materials, including the home environment tool and a set of health and hygiene items that may be added. Users can also access the stimulus cards used during the assessment, an administration guide, and an adaptation guide with notes on how to adapt the tool.

    #4176 Reply

    Fabiola Lara
    Keymaster

A few of the items in the assessment are timed (two minutes, for example), but in general the entire assessment takes anywhere from 20 to 30 minutes to complete.

    #4193 Reply

    Fabiola Lara
    Keymaster

The general rule is to wait five seconds for a child to respond before the assessor repeats the question. If the child still does not respond, score the response as incorrect, or “0.”

    #4195 Reply

    Fabiola Lara
    Keymaster

If, after you have asked a question, the child gets up and starts playing with something else, and you have already called the child back and they say they are no longer interested, then you score their response as “999,” meaning “refused to respond” or “skipped question.” Children may also refuse to respond to a question in other ways: in rare cases they may start crying, or they may verbally express that they no longer want to “play” while still seated. All of these responses should be scored “999.”

    #4441 Reply

    Sarah Strader
    Participant

    I am interested in presenting findings from IDELA at CIES this year, and am looking for co-panelists. My organization is Two Rabbits, and we support indigenous communities to establish semi-formal early learning centers (in line with Cameroon’s policy of expanding ECE access through community preschool centers). We are especially interested in focusing our panel on strategies to support quality and measure development in semi-formal/nonformal preschool environments. Feel free to reach out if you’re interested in collaborating!

    #4443 Reply

    Frannie Noble
    Keymaster

Thanks Sarah, this looks great! I encouraged people to join the discussion in our webinar, and also in the newsletter released today.

    #4465 Reply

    Frannie Noble
    Keymaster

    Hi RL!

    While each use of IDELA is unique, we have a few other conversations that have discussed sample size. They might be interesting for you:

    Sampling

    Sample for an Impact Evaluation

    #4781 Reply

    Frannie Noble
    Keymaster

    (Response from Lauren Pisani with Save the Children. Reposted with permission.)

Nice to hear from you! I do think that the benchmarks that we set out in that report are one helpful way to consider children’s development over time. Children in the ‘mastery’ group answer at least 3 out of 4 questions on the assessment correctly, which suggests mastery of the content, whereas those in the ‘struggling’ group answer fewer than 1 in 4 questions correctly, suggesting that they had difficulty meaningfully engaging with the content of the assessment.

The advantages of this kind of indicator are that it’s easily interpretable by many audiences, and that you can easily track the proportion of children meeting one or both benchmarks to see change over time for a particular area or group of children. The disadvantage is that the benchmarks are most relevant to 6-year-olds or those who are on the cusp of enrollment in primary school. So if you want to incorporate evidence for younger children, it would be useful to modify these benchmarks to be more age-appropriate. Also, these benchmarks don’t take local curricula or standards into account, so consider how you will share this information with local partners/government. (The alternative would be to create benchmarks that map to local curricula/standards, but then the disadvantage is that they’re less comparable across your sites.)
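As a minimal sketch of how these 25%/75% benchmarks could be applied to percent-correct scores (the file, column names, and the label for the middle group are all hypothetical):

```python
import pandas as pd

def benchmark_group(pct_correct: float) -> str:
    """Classify a score using the 25%/75% benchmarks described above."""
    if pct_correct >= 75:   # at least 3 out of 4 items correct
        return "mastery"
    if pct_correct < 25:    # fewer than 1 in 4 items correct
        return "struggling"
    return "in between"     # illustrative label for the middle group

df = pd.read_csv("idela_scores.csv")  # hypothetical file name
df["benchmark"] = df["idela_total_pct"].apply(benchmark_group)  # hypothetical column

# Proportion of children in each group, e.g. by assessment round
print(df.groupby("round")["benchmark"].value_counts(normalize=True))
```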

    #4785 Reply

    Jonathan Seiden
    Keymaster

This is a really important question that (while I was still at SC) we spent a lot of time thinking about, discussing, and, at least for me, worrying about.

    As Frannie and Lauren mentioned, it’s possible to set benchmarks at the 25% and 75% level to create three broad categories that generally show that children are able to do almost none, some, or almost all of the IDELA activities correctly. However, this comes with a couple of limitations. First of all, we should be *very* careful comparing domains against each other–25% on Motor does not indicate the same level of development as 25% on Emergent Literacy. Second of all, we need to be very cautious about interpreting the results because they are not age-adjusted–of course younger children are going to be considered “struggling” more often than older kids, even when they may be developmentally very much on track.

Also, as Lauren alludes to, the ideal benchmarking approach is (unfortunately) a much more involved process, but also one that yields far more informative and culturally and policy-relevant information. That process, simply stated, involves convening a panel of experts in a given context, reviewing the individual activities that children are asked to do on the assessment, and coming to a consensus on what children would be expected to be able to do at various ages. After deciding on the individual activities, these expectations can be aggregated into benchmark cut scores for each domain and age group.

The biggest advantage of this approach is that it generates conceptually comparable benchmarks for different domains and ages of children. For example, you might find that a score of 50% on Motor would generate worry for a 6-year-old, but that the same score on the Emergent Literacy domain (which is substantially “harder”) would be considered on track.

    This non-statistical benchmarking approach is generally known as “Policy linking” and there are a variety of approaches that can be used. When done in a systematic way across contexts, it can yield a much more nuanced and informative benchmark.
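As a sketch of how panel-derived benchmarks might be applied once the panel has agreed on them (all cut-score values below are invented placeholders for illustration, not real IDELA benchmarks):

```python
from typing import Dict, Tuple

# (domain, age in years) -> minimum percent correct considered "on track".
# These values are invented placeholders, not real IDELA benchmarks.
CUT_SCORES: Dict[Tuple[str, int], float] = {
    ("motor", 5): 55.0,
    ("motor", 6): 70.0,
    ("emergent_literacy", 5): 35.0,
    ("emergent_literacy", 6): 50.0,
}

def on_track(domain: str, age: int, pct_correct: float) -> bool:
    """Compare a child's domain score against the panel's agreed cut score."""
    return pct_correct >= CUT_SCORES[(domain, age)]

# The same 50% can signal concern in one domain but be on track in another
print(on_track("motor", 6, 50.0))              # False with these placeholder values
print(on_track("emergent_literacy", 6, 50.0))  # True with these placeholder values
```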

    #4791 Reply

    Lela Chakhaia
    Participant

    Usually you would consider them the same as ‘incorrect’ (when calculating domain scores). In certain cases some users might want to do a separate analysis of missing answers/refusals.
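A minimal sketch of this convention (assuming items coded 0/1 with 999 for refusals/skips; the file and column names are hypothetical):

```python
import pandas as pd

df = pd.read_csv("idela_item_scores.csv")  # hypothetical file name
item_cols = [c for c in df.columns if c.startswith("item_")]

# Flag refusals first so they can also be analysed separately
df["n_refused"] = (df[item_cols] == 999).sum(axis=1)

# Treat 999 as incorrect (0) when computing a simple percent-correct domain score
scored = df[item_cols].replace(999, 0)
df["domain_pct"] = scored.mean(axis=1) * 100
```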
