Sometimes, even imperfect evidence can be useful for assessing the effectiveness of public investments in anti-poverty projects
Several livelihoods-focused community-driven development projects in India completed their first phase of operations in 2011. None of them had a credible impact evaluation. Despite this paucity of evidence, the project interventions were scaled up both within states and across the country through the National Rural Livelihoods Mission. This scale-up was based largely on anecdotal evidence and poorly designed program assessments.

Two of the projects slated for significant scale-up were the Bihar Rural Livelihoods (JEEViKA) Project and the Tamil Nadu Empowerment and Poverty Reduction (Pudhu Vaazhvu) Project. In Bihar, the project envisaged scaling up to all blocks of the state's 34 districts, covering 12.5 million households. In Tamil Nadu, the project planned to scale up to cover almost 4 million households. Because more rigorous (and forward-looking) evaluations could not be implemented in time to inform the expansion of these projects, there was a strong need for "quick and dirty" evidence on this portfolio of projects.

The Social Observatory team conducted quantitative evaluations that use matching methods to generate credible, first-time evidence on the impacts of this livelihoods approach. These evaluations are one component of a comprehensive learning system in both projects. They belong to a set of impact evaluations that includes a more rigorous evaluation of the projects' second phase (based on a randomized controlled trial or a regression discontinuity design), as well as separate impact evaluations of several subinterventions within these projects. This set of evaluations, along with insights from qualitative fieldwork and a series of behavioral experiments, will inform learning in both projects.