As a method of technology design, VSD is often described as a ‘principled approach’, given its overt orientation towards designing technologies for human values rather than consigning them to ad hoc afterthoughts (Friedman, 1996). With almost 30 years of history and development underlying the approach, co-creation between both direct and indirect stakeholders[1] is a fundamental part of the design process, as is the philosophical investigation of values (Umbrello, 2018). Past research has explored how VSD can be applied to specific technologies, such as energy transition systems (Mok & Hyysalo, 2018), mobile phone usage (Woelfer et al., 2011), industrial processes (Longo et al., 2020), and augmented reality systems (Friedman & Kahn Jr., 2000), to name just a few. It has similarly been proposed as a suitable design framework for future technologies, both in the short and long term. Examples include its exploratory application to nanopharmaceuticals (Timmermans et al., 2011), molecular manufacturing (Umbrello, 2019), care robots (Umbrello et al., 2021; van Wynsberghe, 2013), and the less futuristic case of autonomous vehicles (Calvert et al., 2018; Thornton et al., 2018; Umbrello & Yampolskiy, 2021).
Despite all these uses, VSD has so far been applied to AI systems only conceptually, as AI’s self-learning capabilities pose some unique challenges for the VSD approach. To address these, Umbrello and van de Poel (2021) suggest supplementing VSD with a set of AI-specific design principles predicated on the advances made in various AI for Social Good (AI4SG) projects (Mabaso, 2020; Taddeo & Floridi, 2018). However, even these more specific norms are insufficient on their own and require additional value sources that can be harmonised with the aim of designing AI4SG systems through VSD. Stakeholder values represent one such source and form part of the ‘context analysis’ stage in the authors’ four-stage VSD approach. They argue that context is crucial in all AI design:
In all cases […], different contextual variables come into play to impact the way values are understood (in the second phase), both in conceptual terms as well as in practice, on account of different sociocultural and political norms. The VSD approach sees eliciting stakeholders in sociocultural contexts as imperative. This will determine whether the explicated values of the project are faithful to those of the stakeholders, both directly and indirectly. Empirical investigations thus play a key role in determining potential boons and downfalls for any given context. (Umbrello & van de Poel, 2021, p. 7).
To understand the importance of this, both to VSD more broadly and to the design of AI systems in particular, the inner workings of VSD merit brief discussion. Sometimes appearing under somewhat different names, such as ‘Values at Play’ or ‘Design for Values’ (Flanagan & Nissenbaum, 2014; van den Hoven et al., 2015), VSD is traditionally described as a three-phase methodology comprising conceptual, empirical, and technical investigations (van den Hoven & Manders-Huits, 2009). This tripartite approach can be engaged with iteratively or consecutively (see Fig. 1).
Conceptual investigations involve a priori analysis of the potential value implications, the identification of direct and indirect stakeholders, and the likely value tensions among them. This phase also involves developing working definitions of values that can then inform (and be informed by) the other investigations. Empirical investigations involve eliciting data from the stakeholders themselves in order to determine their values and value understandings. This information feeds back into the other phases to help refine the working definition of the ‘value at play’. Finally, technical investigations look at the technology itself, or, more specifically, at how the architecture and design choices of the system might support and/or constrain those values.
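To make this iterative structure concrete, the following minimal sketch in Python models how the three investigations might pass a working value definition back and forth until it stabilises. It is purely illustrative: the class ValueAtPlay and the three investigation functions are hypothetical names invented here for exposition and do not correspond to any published VSD toolkit.

```python
"""A minimal, illustrative sketch of VSD's tripartite methodology.

All class and function names are hypothetical conveniences for
exposition; they are not part of any published VSD toolkit.
"""
from dataclasses import dataclass, field


@dataclass
class ValueAtPlay:
    """A working definition of a value, refined as the investigations iterate."""
    name: str
    working_definition: str
    stakeholder_evidence: list = field(default_factory=list)
    technical_notes: list = field(default_factory=list)


def conceptual_investigation(value: ValueAtPlay) -> None:
    """A priori analysis: stakeholders, value tensions, a working definition."""
    if not value.working_definition:
        value.working_definition = f"provisional conceptualisation of {value.name}"


def empirical_investigation(value: ValueAtPlay, elicited: list) -> None:
    """Elicit stakeholder data and feed it back into the working definition."""
    value.stakeholder_evidence.extend(elicited)
    if elicited:
        value.working_definition += f" (refined by {len(elicited)} stakeholder inputs)"


def technical_investigation(value: ValueAtPlay, architecture: dict) -> None:
    """Record how concrete design choices support or constrain the value."""
    for feature, supports in architecture.items():
        verb = "supports" if supports else "constrains"
        value.technical_notes.append(f"{feature} {verb} {value.name}")


# The three phases can run consecutively or loop until the definition stabilises.
privacy = ValueAtPlay(name="privacy", working_definition="")
for _ in range(2):  # two illustrative iterations
    conceptual_investigation(privacy)
    empirical_investigation(privacy, ["interview: users value data minimisation"])
    technical_investigation(privacy, {"on-device processing": True})

print(privacy.working_definition)
print(privacy.technical_notes)
```

The point of the sketch is simply the feedback structure: each phase reads from and writes to the same evolving value definition, rather than producing a fixed output that later phases merely consume.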
Philosophically speaking, the entire VSD approach is premised on an interactional stance towards technology. VSD thus argues against the value-neutrality thesis of technology (i.e., instrumentalism), claiming instead that technologies embody the values of their creators. This means that they display properties that are both deterministic and constructionist (Friedman & Hendry, 2019). This is a salient way of understanding technological artefacts’ sociotechnicity (as in the case of Winner’s bridges). Societal forces and technologies co-construct, co-vary, and co-constitute each other (Ropohl, 1999). VSD is currently equipped with seventeen specific methods to facilitate systems design in light of this sociotechnicity: (1) stakeholder analysis; (2) stakeholder tokens; (3) value source analysis; (4) coevolution of technology and social structure; (5) value scenarios; (6) value sketches; (7) value-oriented semi-structured interview; (8) scalable assessments of information dimensions; (9) value-oriented coding manual; (10) value-oriented mock-ups, prototypes, and field deployments; (11) ethnography focused on values and technology; (12) model for informed consent online; (13) value dams and flows; (14) value sensitive action-reflection model; (15) multi-lifespan timeline; (16) multi-lifespan co-design; and (17) envisioning cards (Friedman & Hendry, 2019).
To achieve the objective of designing for human values, these methods each have their own uses, including identifying and legitimating stakeholders, identifying and defining value sources, determining how such values relate to their contextual social structures, and supporting design thinking across multiple generations. The suitability of any one method is contingent on the starting point of any given engineering programme. However, part of the attractiveness of VSD is that it can and should be adapted to its individual domain of application. Crucially, it is not a wholesale reimagining of the design space but instead maps onto and augments existing design and engineering practices. This is an important point: AI systems design is advancing at a remarkable pace globally, and because firms recognise the economic and other market advantages of adopting AI systems, they are more than willing to adopt less-than-ready systems despite the potential for recalcitrance (see e.g., Banerjee & Chanda, 2020). As a result, an adaptable design approach that can be cost-effectively mapped onto existing design practices is invaluable. Although little work has been done on this point regarding VSD, a clear objective of the methodology is that it should complement, rather than replace, the day-to-day practices of technology designers (Friedman & Hendry, 2019; van de Poel, 2018). Although some VSD tools take more time to implement than others, there are nonetheless VSD methods available to AI systems designers that can help them avoid many pitfalls caused by short-term, market-imperative thinking; one way of relating design goals to candidate methods is sketched below.
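By way of illustration, this relationship between design goals and candidate methods could be represented as a simple lookup, as in the hypothetical Python sketch below. The groupings are my own illustrative reading of a subset of Friedman and Hendry’s (2019) catalogue, not an official taxonomy from the VSD literature, and the function suggest_methods is invented here for exposition.

```python
# A hypothetical mapping from design goals to a subset of the seventeen
# methods catalogued by Friedman and Hendry (2019). The groupings are an
# illustrative reading, not an official taxonomy from the VSD literature.
VSD_METHODS_BY_GOAL = {
    "stakeholder identification and legitimation": [
        "stakeholder analysis",
        "stakeholder tokens",
    ],
    "value source identification and definition": [
        "value source analysis",
        "value-oriented semi-structured interview",
        "value-oriented coding manual",
    ],
    "relating values to social structures": [
        "coevolution of technology and social structure",
        "value dams and flows",
    ],
    "design thinking across generations": [
        "multi-lifespan timeline",
        "multi-lifespan co-design",
        "envisioning cards",
    ],
}


def suggest_methods(goal: str) -> list:
    """Return candidate VSD methods for a stated design goal, if known."""
    return VSD_METHODS_BY_GOAL.get(goal, [])


print(suggest_methods("design thinking across generations"))
```

The sketch is deliberately trivial; its purpose is to underline that method selection is a contingent, goal-driven choice rather than a fixed pipeline, which is precisely what allows VSD to be layered onto existing practices at low cost.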
In pointing to such methods, I am referring primarily to VSD’s systems-oriented approach. The interactional stance on technology explicitly entails looking not only at discrete technologies but also viewing them as fundamental and inseparable constituents of social forces, organisations, institutions, and infrastructures (i.e., as systems-within-systems). Likewise, VSD takes a complete systems view of this broader design context by including the various direct and indirect stakeholders implicated in these systems. As mentioned above, VSD, like systems engineering generally, draws on the theories and methods of multiple disciplines to achieve greater equifinality in design. Mapping out the long-term network effects that a system can produce is therefore necessary, even though doing so runs contrary to much of the previously discussed short-term thinking that characterises most modern innovation practices. Such short-termism cannot be risked with transformative technologies like AI.
The following section proposes the use of envisioning cards as an easily adoptable way for AI design firms to engage in VSD while also minimising drastic internal changes, thus harmonising their economic incentives with critical human values. This approach permits long-term, multi-generational thinking for a wider group of stakeholders, which is highly relevant for AI and other globally impactful sociotechnical systems.
[1] ‘Direct stakeholders’ are those who may be impacted through direct interaction with the technology. They can include users, designers, and some managers. ‘Indirect stakeholders’ are those who may be impacted by the system but do not directly interact with it. They can include stakeholder groups like executives, other publics, and the environment or nonhuman animals.