We Will Not Be Pacified by Participation
By Alex Ahmed
In the sciences, the term collaborators usually refers to academics or consultants brought on to advise a research team. In research with and about people, human subjects are recruited to contribute data to a study, for example, by filling out a survey. When most people outside of academia encounter science (and scientists), it is as a subject and not as an equal partner. This hierarchical barrier solidifies the researcher as an authoritative, dispassionate, and neutral figure capable of objective observation. For decades, though, scientists have explored different approaches that could erode this barrier in the name of cooperation and mutual learning.
One of these approaches, called participatory design (PD), originated in the field of computer science.1 Its basic premise is that technology designers should consult with and learn from the eventual users of whatever they are building. To this end, researchers interview people about their experiences using a piece of software and solicit feedback. Today, this method is widely used in the technology sector to improve marketability and usability of products, but the originators of PD had explicitly political intentions: to collaborate with unions and bolster unions’ power in their fight against tech-enabled exploitation by management.2
The history of PD has a lot to teach researchers on the left, especially those who seek to cooperate and build with others outside of the academy. PD’s successes are worth attending to, but so are its limitations. I present PD as one example of how “participation” and “collaboration” can be wielded as weapons against workers and marginalized people. In these cases, those in power create systems that appear to be an avenue for people to address their concerns, but are ineffectual at best and violent at worst. Rather than argue for specific solutions, I conclude by showing how workers inside and outside the academy have already organized together to articulate, fight for, and win demands.
In the early 1970s, Norwegian computer scientist Kristen Nygaard, along with his contemporaries in Sweden and Denmark, was exploring a new set of methods that would eventually be known as PD. These researchers were concerned that workers and unions did not have a say in the introduction of computerized tools in the workplace, and so they built partnerships “to build up technical and organizational competence among workers,” which they thought would ultimately help them win improvements in working conditions.3 Rather than adhering to dispassionate observation, the researchers were explicit about issues of power and control in the workplace, noting that “the primary objective… is to contribute to the efforts of trade unions” rather than management, and that “an ideal of the project is to assure the trade unions a democratic control of the research process.”4
These initial projects involved unionized metalworkers, office workers, and chemical industry workers in Western Europe, and led to increased bargaining power for their respective unions and some legislative victories. For example, a new law required management to disclose to workers any plans for introducing new technologies, such as automation or worker monitoring. Nygaard’s participatory projects provide an example of how researchers can collaborate effectively with workers, who can determine the direction of the research and immediately bring research findings into bargaining sessions with employers. But if workers are not organized into a union, they have no mandate or structure to bargain with their employer or push for changes in practice. This poses a problem, especially in largely unorganized industries or locations such as the United States.
Even under ideal conditions (strong unions that directed the research themselves), Nygaard’s method was limited in understanding the wider impacts of technology outside of the specific workplace being studied.5 As a case in point, the managers of a metalworks factory introduced a new production control system. Researchers assisted the workers in discovering that the new system would centralize the pace and organization of daily work. Management argued that the workers simply held a “psychological resistance” to the technology, and aimed to roll out the system in all departments. In defiance, the union decided to organize against the system’s installation—and they succeeded. Despite understanding the harmful effects of the new system, the workers could not alter the system itself, nor challenge the fact that it was designed to streamline and control production, removing autonomy from workers. Only through collective organizing could workers act upon their research findings to stop the system from being deployed.
These limitations only grew more stark over time. Beginning in the 1980s and 1990s, PD became more fully enmeshed within corporate agendas, with management participating in research projects alongside workers.5 These projects undermined workers and their organizations by collaborating with management, and flattened power dynamics by including both managers and workers in the research as supposedly equal stakeholders.
The idea of labor-management cooperation has deep roots, stretching back to the heated labor struggles of the early 1900s. That legacy is very much alive today. Historian Toni Gilpin describes how multiple factors—the rise of the Industrial Workers of the World, the 1917 Russian Revolution, and the 1919 founding of the Communist Party of America among them—led American industrial leaders to convene the Special Conference Committee (SCC). The bosses opted for a change of course from decades of brazenly murdering striking workers in the streets with the help of the police. Instead, they “proclaimed a dawning era of industrial cooperation that would render unions obsolete.”6
In April 1919, the SCC proposed a “works council” for capital and labor to “work out their problems.”7 Under one such system, farm equipment workers chose representatives to speak on their behalf, who joined with management representatives in equal numbers to collaborate on problems facing the workplace. But this corporate benevolence, unsurprisingly, did not lead to improved conditions for workers. Votes concerning wage raises would end in ties (with management opposed and workers in favor), which were then broken by the company president. Certain topics, such as substantive changes to the organization or pace of work, were entirely off the table. Instead, workers’ patience was tried by discussions on “how to keep lunchroom milk colder” and “a company vice-president’s recent trip to Europe.”8 Importantly, workers who continued to organize outside of corporate-approved channels were still swiftly met with police violence. The council system also provided the company with a new way to surveil potentially meddlesome workers: funneled into these ineffectual councils, they alerted the company to their presence and opened themselves up to discipline or termination. Workers quickly recognized that the council system, though billed as benevolent, was a tool of management.
Today, scientists are engaging in projects that essentially recreate the works council system. The state, corporations, and academia have converged to promote the creation of technologies which will supposedly benefit the public. Artificial intelligence (AI) systems are one example. Recent work by Stanford researcher Gregory Falco has proposed that “participatory AI” could rehabilitate AI’s image such that it “can be seen as an ethical and trustworthy city asset rather than an adversary fraught with controversy and bias.”9 In fact, these participatory projects often arise due to grassroots opposition to the very same creations that researchers relentlessly pursue with state and corporate backing.
Falco dismisses those public concerns a priori, and suggests that AI systems should be subjected to a community “feedback period.” In his proposal, public comments about AI tools will be fed through—without a drop of irony—yet another AI tool, which would sort and categorize public comments for state officials, who would remain the ultimate decision makers. As a case in point, he offers the algorithm governing the Chicago Police Department’s “strategic subject list” (SSL), which could have been “less biased and more socially responsible” if governed by such a process.10 Developed by researchers at the Illinois Institute of Technology, the SSL was ultimately just another way for the police to continue racial profiling of Black people.11 By “predicting violence,” the system perpetuated violence, resulting in a horrific “self-fulfilling prophecy.” A year after Falco wrote his paper, the SSL was discontinued.
And yet, confronted with endless examples of AI systems perpetuating violence against poor people and people of color, starry-eyed researchers still see AI as something that can be “realigned” to “serve the interests of all.”12 Their word choice, realignment, implies that technology development was once on the right track but at some point lost its way. Participatory methods are supposed to facilitate this course correction by presumably making systems more inclusive, responsive, or benevolent. We are to believe that crowdsourced technological improvements will eventually trickle down to the people and produce a better society. Some AI researchers are now questioning these claims; although they correctly identify the false promise of participation, their vague solution misplaces faith in the industry to reform itself. This brand of PD fails to examine the power dynamics of technology development and use, opting instead to “benefit all stakeholders involved.”13
All stakeholders should not and cannot be prioritized equally. Just as in the SCC’s works council system, capital has its fingers on the scale. The advancement of industry is the primary aim and result of participation, which adds new layers of exploitation and surveillance in the process. Worker-centered PD, as originally practiced by Kristen Nygaard and his contemporaries, is one way researchers might salvage projects that would otherwise wind up supporting the goals of capital. We must take up a power analysis that speaks to the material reality of class struggle, rather than to hollow labor-management “cooperation.”
In participatory research, people are often recruited to offer token input on systems over which they have no real control or power. Scientists must recognize how we are also governed by similar systems ourselves, through administration-controlled faculty senates, listening sessions, and diversity and inclusion committees. By tapping into our own exploitation as workers, we can see that we’re not so different from our “participants” after all.
We can look to existing campaigns for inspiration. Rutgers faculty union president Todd Wolfson describes how student, worker, and community voices are absent from university decision-making processes.14 In pursuit of a vision of democratic governance, the union collaborated with local organizations in New Brunswick on a study investigating the key concerns of residents and Rutgers students. Using this information, the union-community partnership is now crafting a campaign around shared issues: low-wage work, housing, and health care. In doing so, organizers contributed to a series of remarkable labor organizing campaigns in education (see the Chicago and LA teachers’ unions for other recent examples). By recognizing the links between workplaces, employers, and the broader community environment, workers organized to fight for change outside of their immediate workplaces. This framework is called “bargaining for the common good.”15
While the methods used by the Rutgers-New Brunswick study (e.g., interviews and surveys) are the same as those used in PD, the difference lies in how the study is applied, by whom, and toward what end. The labor coalition at Rutgers aims to create a durable framework through which workers and students can “change the balance of power,” not just at the university, but in the city at large. This is accomplished, in part, by transforming the role of research. Rather than understanding a social issue as an object to be studied, from which papers and grants are mined, we can instead begin with a focus on our relationships with each other and with power structures, develop a political analysis and vision as a coalition, and initiate research as a tool toward realizing concrete demands. In this way, the ongoing relationships between people do not depend on the completion of research, but the other way around.
It must also be stressed that this view of research—as a collective endeavor, an organizing tool, and a means to a strategic political end outside the probing eye of administrators and funding structures—is not conducive to career advancement as an academic. If it were, it wouldn’t be effective. Under the status quo, the only role for scientists in a social movement is to act as support staff, providing occasional service to beleaguered workers, rather than to engage in political action ourselves. True cooperation requires that we organize together in pursuit of common goals, in opposition to common enemies, along class lines.
This article is part of our Winter 2021 issue: Cooperation.
1. Morten Kyng and Lars Mathiassen, “Systems Development and Trade Union Activities,” DAIMI Report Series 8, no. 99 (January 1980): 5.
2. Finn Kensing and Jeanette Blomberg, “Participatory Design: Issues and Concerns,” Computer Supported Cooperative Work 7 (1998): 170.
3. Kensing and Blomberg, “Participatory Design,” 170.
4. Kyng and Mathiassen, “Systems Development,” 5.
5. Kensing and Blomberg, “Participatory Design,” 170.
6. Toni Gilpin, The Long Deep Grudge: A Story of Big Capital, Radical Labor, and Class War in the American Heartland (Chicago: Haymarket Books, 2020), 47.
7. Gilpin, The Long Deep Grudge, 47.
8. Gilpin, The Long Deep Grudge, 51.
9. Gregory Falco, “Participatory AI: Reducing AI Bias and Developing Socially Responsible AI in Smart Cities,” IEEE International Conference on Computational Science and Engineering and IEEE International Conference on Embedded and Ubiquitous Computing (August 2019): 154.
10. Falco, “Participatory AI,” 157.
11. Matt Stroud, “Heat Listed,” The Verge, May 24, 2021, https://www.theverge.com/22444020/chicago-pd-predictive-policing-heat-list.
12. William S. Isaac, Shakir Mohamed, and Marie-Therese Png, “Forum Response: Decolonizing AI,” Boston Review, May 20, 2021, https://bostonreview.net/forum_response/decolonizing-ai/.
13. Isaac, Mohamed, and Png, “Decolonizing AI.”
14. Todd Wolfson and Astra Taylor, “Beyond the Neoliberal University,” Boston Review, July 30, 2020, https://bostonreview.net/class-inequality/todd-wolfson-astra-taylor-beyond-neoliberal-university.
15. Marilyn Sneiderman and Secky Fascione, “Going on Offense During Challenging Times,” New Labor Forum, CUNY School of Labor and Urban Studies, January 2018, https://newlaborforum.cuny.edu/2018/01/18/going-on-offense-during-challenging-times/.