Designing and Using Values-Based Sustainable Development Indicators
1. What are values-based indicators, and why are they needed?
An indicator is an observable variable that represents an aspect of an intangible theoretical variable, thus enabling complex realities to be condensed and simplified (e.g. Gallopin, 1997; Gudmundsson, 2003). Indicators can provide information, for example, on how effectively a non-profit organization is operating; whether an ecosystem is deteriorating; and whether poverty is increasing or decreasing within a country. They also have conceptual uses - not merely reflecting, but helping to define, what is important in a society (Bahá’í International Community, 1998; Meadows, 1998). As any student revising for exams will affirm, it’s not enough to say that “something is measured because it matters”. Rather, things matter to people precisely because they are measured (cf. Henshaw, 2006:58).
In this context, the lack of usable indicators for ethical/spiritual values such as ‘equity’, ‘moderation’, ‘respect’ and ‘unity’ is an issue of global concern. If these values cannot be measured, they may be treated by policy-makers as if they do not matter! Yet values, almost by definition, are the things that matter most to us: the behaviours or ideal states of existence that we prefer above the alternatives (Rokeach, 1973, 1979). Development professionals, resource managers and policy-makers might all agree, for example, that equity is preferable to discrimination, moderation to over-consumption, respect to rudeness, and unity to conflict. In the absence of ‘league tables’ for ethical values, however, they could be inadvertently compromised through policies aiming to boost GDP or meet ‘development’ targets.
2. How can shared understanding of value meanings be created?
The difficulty, in trying to create indicators for values, is that no objective definitions exist. Everyone has their own subjective understanding of what ‘equity’ or ‘moderation’ means, on the basis of personal experience in specific situations. Thus, in order to develop usable indicators, people need to use dialogue to build a shared understanding of the meaning of an ethical value-related word or phrase – which we term an intersubjective definition – within a particular local context. An international research project led by the University of Brighton has developed a methodology for achieving this, which we will describe in this paper.
Shared understanding can only be created when individuals:
(a) recognise that their perspective on a situation may not be the only one – that, in the words of an African proverb, “one head cannot contain all wisdom”;
(b) make a choice to engage with other people’s understandings, acknowledging that this will generate ‘added value’ (Talamo & Pozzi, 2011);
(c) engage in genuine dialogue at the interface between different perspectives, within a specific practical context.
We can distinguish ‘genuine dialogue’, in which there is mutual learning, from what Bakhtin (1981) terms ‘monologue disguised as dialogue’, in which one person regards their own views as superior and attempts to impose them on the others.
3. Creating values-based indicators: a step-by-step introduction
The process of designing indicators can be envisaged as consisting of six broad steps, examined in turn below:
- (a) Defining a shared context;
- (b) Collecting data on local understandings of values and how they are lived;
- (c) Identifying and prioritising ‘core’ values;
- (d) Identifying and prioritising draft indicators for the core values;
- (e) Field-testing the draft indicators;
- (f) Revising the indicators on the basis of field evidence.
3.1. Defining a shared context
Values-based indicators can be co-developed by any group of people that shares a specific practical context. This may be very specific, such as an individual non-profit organization; or it may be very general, such as a loose consortium of people involved with managing different natural resources who want to generate indicators that are broadly applicable across the whole sector. When the overall context has been defined, it will also be important to identify the stakeholders who will be involved in the project and determine how decisions will be made at each stage. Academic literature and previous practical experience suggest that the more deeply end-users participate in the research, at all stages of the project, the more benefit they will derive from the experience.
3.2. Identifying local understandings of values and how they are lived
In order to determine what is important to people within the specified context, it is important to elicit their insights in their own words rather than imposing a predefined framework. Thus, in contrast to conventional values measurement approaches based on fixed survey questions, our approach uses qualitative data collection methods.
Interviews, focus groups, open-ended surveys and document analysis may be used in any combination, according to the context itself. The primary data collection method is usually semi-structured interviews with individuals who have important roles in decision-making and strategic planning.
The term ‘semi-structured’ means that broad themes are decided in advance, but specific questions will be flexible – again, depending on the context and the interviewees. If ‘values’ are already a regular topic of conversation, the subject can be broached directly. Here are some examples of direct questions that might be used:
- What do you see as the core values of this organization / community?
- How are these values lived out in practice? Can you give any examples?
- How would you know if these values were present?
If, however, values are rarely or never discussed in this setting, it may be more appropriate to start with indirect questions that do not refer to the word ‘values’ as such. Rather, they attempt to elicit values through personal narratives or descriptions of an imagined ideal. Some examples of indirect questions might be:
- Describe what you do on a daily basis. What gives you the most satisfaction?
- Think about one of your projects that was especially successful. What were some of the factors that made it work so well?
- Can you think of any occasions when your projects weren’t as successful as you felt they should be? What was missing? What could have been done differently?
- Imagine an ideal community / neighbourhood that you would regard as exemplifying ‘sustainability’. What is being sustained? How is this achieved?
- What have been the highlights, for you, of living in this community / neighbourhood? What makes it a pleasant place to live? What could make it better?
3.3. Identifying and prioritising ‘core’ values
Once a data set has been acquired, through interviews and other methods in any combination, it can be analysed by qualitative researchers to identify putative core values and draft indicators. Researchers use a technique called ‘coding’ – attaching small strings of characters to sentences or phrases, such as VAL to represent any value, CPN for ‘compassion’ or EQT for ‘equity’ – to compare similar quotations across the whole data set. This can be done manually, or with the help of a computer-assisted qualitative data analysis software (CAQDAS) package such as NVivo or Atlas.ti. Codes may be applied either when the term in question is explicitly mentioned in the text, or when it is implicit in a particular quotation, according to the researcher’s own judgement.
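The coding technique can be illustrated with a minimal sketch. The quotations, speaker labels and code names below are hypothetical, invented purely to show the underlying idea: each quotation carries one or more codes, so that similar passages can be retrieved and compared across the whole data set.

```python
# Minimal sketch of qualitative coding (hypothetical data and codes).
# Each quotation is tagged with codes such as VAL (any value),
# EQT ('equity') or MOD ('moderation').

coded_quotations = [
    {"speaker": "interviewee_01",
     "text": "Everyone's voice carries the same weight in our meetings.",
     "codes": ["VAL", "EQT"]},
    {"speaker": "interviewee_02",
     "text": "We try not to take more from the forest than it can replace.",
     "codes": ["VAL", "MOD"]},
    {"speaker": "interviewee_03",
     "text": "Resources are shared fairly between the villages.",
     "codes": ["VAL", "EQT"]},
]

def quotations_for(code, quotations):
    """Return all quotations tagged with the given code."""
    return [q for q in quotations if code in q["codes"]]

# Retrieve every passage coded for 'equity', ready for comparison.
for q in quotations_for("EQT", coded_quotations):
    print(f'{q["speaker"]}: {q["text"]}')
```

A CAQDAS package such as NVivo or Atlas.ti performs essentially this retrieval-by-code operation, alongside much richer annotation and query facilities.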
The data analysis step is likely to generate a very large number of value-labels (words and phrases used to describe values), so unless there is a substantial resource commitment for the project, it will probably be necessary to prioritise between three and ten specific ones for indicator development. This may be done in several ways:
i. Quantitative analysis: identifying the values that are mentioned most often (within the data set as a whole, or by particular individuals and/or sub-groups);
ii. Researcher consensus;
iii. Stakeholder consensus (collective decision made, for example, by a sub-group of key informants, a representative sample of interviewees, or the senior management team for the project).
These approaches may be combined, e.g. by using quantitative analysis to identify an initial shortlist which is then prioritised further through stakeholder consensus.
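The quantitative analysis in option (i) amounts to counting how often each value-label was coded and taking the most frequent ones forward. The sketch below shows this step with hypothetical data; the labels and shortlist size are illustrative assumptions, not prescribed by the methodology.

```python
# Illustrative sketch of quantitative prioritisation: count how often each
# value-label appears across the coded quotations, then shortlist the most
# frequent ones for further discussion. All data here are hypothetical.

from collections import Counter

# One entry per coded quotation; a quotation may carry several value-labels.
coded_labels = [
    ["equity", "unity"],
    ["moderation"],
    ["equity"],
    ["respect", "equity"],
    ["unity"],
]

counts = Counter(label for quotation in coded_labels for label in quotation)

SHORTLIST_SIZE = 3  # typically between three and ten values are taken forward
shortlist = [label for label, _ in counts.most_common(SHORTLIST_SIZE)]
print(shortlist)
```

The same counts can also be broken down by individual or sub-group before shortlisting, and the resulting shortlist handed to stakeholders for the final consensus decision.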
3.4. Identifying and prioritising draft indicators for the core values
Once the core values have been selected, draft indicators can be drawn out from the data set by searching for the respective codes and looking for expressions of “how these values are lived” or “what these values look like”. As in the case of values, prioritisation will be needed, depending on the maximum number of indicators that is felt to be practicable.
While in theory, any of the three different prioritisation approaches would be possible, the use of a stakeholder consensus approach is strongly recommended at this stage. This helps to ensure that the indicator set truly represents a shared understanding of how the stakeholders understand each of the chosen values, rather than just reflecting researcher biases in the data collection or analysis.
3.5. Field-testing the draft indicators
Field-testing will usually take place in several different settings, and be directed towards different research questions. Typically, the indicators are iteratively improved through what has been termed an “adaptive learning process” (Reed et al., 2006), in which the ‘community’ partners (potential users) focus on optimising the relevance, comprehensibility, measurability and usability of the indicators within their particular context, while the ‘expert’ partners (researchers) provide guidance on reliability and validity.
In order to field-test the indicators, it may be necessary to localize them by modifying the wording to suit the specific context in which they are being tested. Localization is an excellent strategy for enhancing the relevance of the indicators, but it also prevents the results of different field trials from being compared directly, which can sometimes be problematic. The balance between localizability and generalizability will depend on specific circumstances.
Another aspect of field-testing is identifying suitable assessment methods through which data can be collected for each indicator. A mixed methods approach (combining at least one quantitative method, such as a survey or structured observation of behaviour, with at least one qualitative method, such as interviews or focus groups) is usually recommended. This is because the different methods can complement each other, with the quantitative method answering broad numerical questions such as ‘how many’ and ‘how much’, while the qualitative adds rich detail on ‘why’ and ‘how’. Mixed methods research has its own standards and procedures for data analysis, although it is beyond the scope of this introductory paper to dwell on them in detail. An important point, however, is that there is often a tension between scientific rigour on the one hand and usability on the other (Peterson, 2010). While it is, of course, essential to ensure that the conclusions drawn from the evaluation are valid ones, it is also important to guard against making it too complicated to be implemented with the available human and financial resources.
3.6. Revising the indicators on the basis of field evidence
On the basis of evidence from field trials, some of the indicators may be reworded, merged, split into their component parts, or deleted altogether. Such decisions would ideally be made through dialogue between the stakeholders (users) and researchers. Steps 3.5 and 3.6 may be repeated several times, thus improving the indicators in an iterative way by increasing their relevance, comprehensibility, measurability, usability, validity or reliability. For example, data from a second round of field trials could generate a third set of draft indicators, which in turn would be tested and revised again to derive the final list.
4. The benefits of using values-based indicators
Using a ‘menu’ of localizable values-based indicators can result in transformational learning for resource managers, policy-makers, local community members, and even the consultants carrying out the field visits. In particular, the indicators have been shown to generate new, shared understandings of values, indicators and assessment methodologies. Values that were previously abstract in nature, and understood in different ways by different individuals, can be operationalized and measured by localizing specific indicators from the menu and developing appropriate assessment tools. This can help to reduce the likelihood of serious misunderstandings about their meanings. Other previously observed benefits of using values-based indicators include the following:
- Increased understanding and acceptance of oneself, other individuals, and the group as a whole (e.g. enhanced awareness of people’s motivations, commitment and behaviour)
- ‘Eureka moments’ – instant insights into how a situation could be improved
- Improvements in ‘buy-in’, commitment, motivation and morale
- Individual behaviour change and transformation of group dynamics
- Changes to general strategic planning, and / or to specific protocols for monitoring and evaluation (M&E), internal communications and training
- Changes in the way in which values-related processes and outcomes are communicated beyond the organization / institution itself
- Resulting transformation of relationships with other organizations / institutions / individuals, including current and potential donors, partners and clients
5. Evolving a community of practice
We would anticipate that, rather than being centrally planned and administered, any effort to scale up the application of values-based indicators in sustainability initiatives would need to be led from the grassroots. As intersubjectivity (the construction of shared understanding) is tied to specific practical contexts, the most appropriate model might be a ‘community of practice’ in which different institutions design and field-test their own indicators, using the same overall methodology. Through a web platform such as WeValue.org, supplemented by occasional conferences or technical workshops, participating organizations could share their experiences with the indicators and work together to address any concerns. Such virtual and real gatherings would lead, over time, to an understanding of which values-based indicators could usefully be adopted at national levels and mainstreamed into global sustainability discourses.
6. Further information
More information about the ESDinds Project (including the Programme of Work and formal deliverables submitted to the European Union) can be found in the resources section. ESDinds was funded under the European Commission’s Seventh Framework Programme (FP7) from 2009 to 2011, with reference number 212237. The sponsor had no involvement in the design or conduct of the research, or in the preparation of this thematic summary. The WeValue web platform has over 100 members and provides free access to a set of 166 values-based indicators, as well as detailed information on assessment methods for evaluation, sample assessment tools, case studies, and online tools to help users to select and localise indicators. Its social networking facility allows members to discuss their experiences and share files. To learn more or sign up, please visit www.WeValue.org.
Bahá’í International Community (1998). Valuing Spirituality in Development: Initial Considerations Regarding the Creation of Spiritually Based Indicators for Development. A concept paper written by the Bahá’í International Community for the World Faiths and Development Dialogue, Lambeth Palace, London, 18-19 February 1998. Bahá’í Publishing Trust, London.
Bakhtin, M. M. (1981). The Dialogic Imagination: Four Essays by M. M. Bakhtin. Austin, TX: University of Texas Press. (Original work published 1975).
Cha, S. E., & Edmondson, A. C. (2006). When values backfire: Leadership, attribution, and disenchantment in a values-driven organization. Leadership Quarterly, 17: 57-78.
Gallopin, G.C. (1997). Indicators and their use: information for decision-making. In: Moldan, B., Billharz, S. (Eds.), Sustainability Indicators. Report on the Project on Indicators of Sustainable Development. John Wiley, Chichester, pp. 13–27.
Gudmundsson, H. (2003). The policy use of environmental indicators - learning from evaluation research. Journal of Transdisciplinary Environmental Studies 2 (2), 1–12.
Henshaw, J. M. (2006). Does Measurement Measure Up? How Numbers Reveal and Conceal the Truth. Baltimore, MD: Johns Hopkins University Press.
Meadows, D. (1998). Indicators and Information Systems for Sustainable Development: Report to the Balaton Group. The Sustainability Institute, Hartland Four Corners.
Reed, M.S., Fraser, E.D.G., Dougill, A.J. (2006). An adaptive learning process for developing and applying sustainability indicators with local communities. Ecological Economics 59, 406–418.
Rokeach, M. (1973). The Nature of Human Values. New York: Free Press.
Rokeach, M. (1979). Understanding Human Values. New York: Free Press.
Talamo, A., Pozzi, S. (2011). The tension between dialogicality and interobjectivity in cooperative activities. Culture & Psychology 17, 302-318.