Given the glaring absence of national legislation in the U.S. regulating new technologies, the recent release of several executive orders on artificial intelligence (AI) at the federal level and in California is significant. President Biden’s executive order in particular is effectively the country’s first national public policy on AI. Moreover, these orders have the potential to result in concrete progress on protecting workers and ensuring they benefit from emerging digital technologies. That potential stems not just from the government’s regulatory power but also from its large footprint as a purchaser of technology, a contractor with private sector businesses, and a provider and funder of public services.
We believe that governments at all levels should play a decisive leadership role in establishing robust standards around the use of AI and other digital technologies, prioritizing the safety and well-being of impacted communities—especially low-income communities and communities of color—as well as public sector workers and other workers connected to public funding.
Moreover, public sector workers stand to play a critical role in safeguarding the public good against potential harms of government’s use of technologies capable of autonomous reasoning and decision making. For this safeguard to be meaningful, workers must have the skills and authority needed to implement accountability strategies, as well as the right to bargain fully over technology design and impacts.
Below we outline core principles for how government action on AI can benefit public and private sector workers, and comment on how two recent executive orders reflect those principles. Our goal is to help inform the significant work that lies ahead for federal, state, and local governments in their efforts to model responsible use of AI.
AI in the Workplace
The recent executive orders on artificial intelligence by President Biden at the federal level and Governor Newsom in California cover broad ground, outlining potential risks of AI and directing agencies to develop guidelines for responsible use. Impacted communities have a significant stake in how these executive orders are interpreted, including persons in the criminal justice system, patients in the healthcare system, students and teachers in public schools, and the public in general.
Workers and their unions have a stake as well. Across the country, employers are increasingly adopting AI and other digital technologies such as electronic monitoring, algorithmic management, and task automation to make a wide range of employment and production-related decisions. These decisions can affect wages, benefits, work schedules, hiring and firing, discipline and promotion, the number and location of jobs, skill requirements, workloads, and workplace health and safety.
Moreover, as governments introduce AI and related technologies into work processes, public sector workers will see their jobs fundamentally transformed, affecting the skills required to do their work, their interaction with clients, their autonomy over work products, and more. Research is raising concerns about negative impacts on mental and physical health, blurring of boundaries between home and work, invasions of worker privacy, inadequate training supports, and difficulty maintaining service quality when new technologies are rolled out. Introducing advanced technologies such as AI can exacerbate power imbalances in the workplace, leaving workers struggling to understand, challenge, or explain outcomes of algorithmic decision making. Finally, the efficiencies and cost savings enabled by technology may not accrue to workers, even as their jobs become less secure, more intense, and lower paid.
Government Strategies to Ensure Responsible Workplace Tech
1. Leverage the full scope of government spending to promote responsible use of AI
Discussions about government leadership on AI often focus on procurement as an important site for responsible use standards. But the net should be cast wider: federal, state, and local governments should leverage all of the avenues where taxpayer dollars directly or indirectly touch AI to apply such standards. The footprint of government spending extends through three main channels, each of which can have direct impacts on both public and private sector workers.
- AI procurement: When a government buys AI systems for its own use, both as an employer and as a service provider, strong procurement standards can set criteria for how agencies bid, contract, and evaluate such systems. Governments may procure AI technologies like scheduling or productivity software for workforce management, or systems such as chatbots and automated data processing that public sector workers use to administer benefits.
- Contracting: Governments often contract with private sector companies and nonprofits to deliver services to the public (for example, nonprofit agencies that administer job training programs or manage public housing developments). Governments should set specific standards on AI use that contractors must comply with, ranging from how client data is collected and protected, to protections for workers interacting with AI.
- Program funding: Governments fund programs operated by public agencies or nonprofits through competitive grants or funding transfers that impact a wide range of workers and their working conditions. Examples are grants to nonprofits to provide training on educational technology, grants to police departments for purchasing surveillance equipment, and Medicaid funding that supports hospitals and local clinics. Programs and businesses that receive public funding should be required to comply with responsible AI use standards, both for the public and clients and for their workers.
The executive orders: Both executive orders identify AI procurement as a key government activity where guidelines will need to be developed and specify some parameters for those guidelines (such as requiring risk analyses and anti-discrimination safeguards). The California order does not explicitly identify contracts and program funding as other areas requiring AI use standards. The federal order effectively does so by directing relevant agencies to ensure the responsible development and deployment of AI in specific sectors—healthcare, energy, housing, criminal justice, and education—especially with regard to bias and discrimination.
2. Adopt and require a robust set of worker technology rights and protections
Over the past several years, unions and other worker advocates have developed principles and legal standards to ensure that workers benefit from, and are not harmed by, AI and other digital technologies. Ultimately, these principles will need to be enshrined in new laws, since existing U.S. employment and labor laws do not suffice.
That said, the government can play a critical modeling role by adopting robust worker protections. It should do so both in its own use of digital technologies in the public sector workplace, and by mandating such protections for businesses that are touched by government funding through the pathways described in #1 above. Key principles for protecting workers include:
- Worker data rights: Workers should have the right to know when employers are collecting data on them and surveilling them, and for what purpose. They should have the right to access and correct their data and to opt out of employers selling or sharing their data.
- Guardrails on how employers use digital technologies: Employers should only use electronic monitoring and algorithmic management for narrow business purposes, without harm to workers. In particular, the unreliability of these systems means they shouldn’t be used to automate decisions like hiring, firing, and discipline. Productivity management systems should be subject to health and safety regulation and review by government agencies.
- Prohibitions on discrimination: Data-driven technologies such as hiring software should not discriminate against workers based on race, gender, disability, and other protected characteristics. As another guard against bias, employers should be prohibited from using predictions about a worker’s traits and behaviors that are unrelated to their job responsibilities.
- Right to organize and bargain: Workers should have the right to organize and fully bargain over employers’ use of AI and other technologies. And as recently affirmed by the NLRB, employers should not use digital technologies or social media monitoring to identify, monitor, or punish workers for organizing.
- Holding employers responsible for harms: Regulatory agencies should play a central role in enforcing technology standards, and workers should be able to sue employers for violations. But we should not wait for harms to occur: employers should be required to conduct impact assessments prior to using new technologies, and regularly assess impacts thereafter.
- Automation and training: Workers who are impacted by task and job automation should have the right to education and training to grow their skills and adapt to changes in job content. They should also have the right to be considered first for other job openings at their employer, and be compensated for tech-driven job loss if layoffs do occur.
The executive orders: The federal order does a much better job than California’s in recognizing the importance of worker technology rights. It directs the U.S. Department of Labor to publish principles and best practices for employers to mitigate AI’s potential harms. What remains to be developed is a clear strategy for ensuring that federal agencies adopt these standards in their AI procurement as well as contracting and funding programs. By contrast, the California order does not explicitly call for the development of responsible AI standards to protect the public sector workforce, although these could emerge during the order’s implementation.
3. Promote and model worker-centered technology development
While worker data rights and protections are critical, they alone will not be enough to ensure that AI is used for the public good. Ultimately, workers should participate fully in technological change because they possess the knowledge and experience to support responsible and effective implementation.
In particular, public sector workers and their unions should have a central role in the design, adoption, and implementation of AI and other emerging technologies. Research and past experience suggest that involving workers from the outset makes technology adoption more successful, helps to identify and prevent harms, and enables workers to do their jobs more effectively. Most importantly, public sector workers are on the front lines directly engaging with the public and are therefore best positioned to anticipate the impacts of AI on accessibility, race and gender equity, and quality and efficiency of public services.
Here we want to underline the importance of funding and training a government workforce that has the skills and experience to ensure responsible use of AI. Governments will need substantial technical and public policy expertise on AI for all stages of technology evaluation and adoption; the priority should be to build this expertise in-house, given the importance of centering the public good in a context where private sector interests often dominate.
The executive orders: Neither of the executive orders prioritizes involving workers in decision making about AI design and implementation. In particular, while both orders direct agencies to identify specific government services where AI would benefit the public (“use cases”), they offer no parameters for doing so, even though this is where workers’ expertise is especially important. More generally, the orders contain scattered language about consulting worker organizations in developing responsible use guidelines, but they do not specify a formal mechanism, level of engagement, or vision for ongoing worker participation. Both executive orders do include language about training public sector workers in the skills needed to use AI.
4. Support workers affected by technological change
The tools we currently have for supporting workers affected by technological change have never been very strong, even for workers whose experience fits the singular model of being made completely redundant by technology. Existing displaced-worker policies such as Trade Adjustment Assistance (TAA) and the Worker Adjustment and Retraining Notification (WARN) Act are unlikely to be a good fit for many scenarios of emerging digital technologies: the pace of change is typically incremental, and occupations evolve gradually through partial task automation and augmentation rather than the mass plant closings that shaped these programs.
Here we will need significant policy innovation. In some cases, for example, jobs will only experience partial automation of a subset of tasks while taking on new ones, and can best be thought of as evolving over time with technological change. In other cases, a set of job functions might migrate from one industry segment to another, as technology changes where, when, and how work is done. And in many sectors, the challenge will be adapting the work process as humans increasingly work alongside and in collaboration with complex technological systems. Finally, specific occupations may in fact experience rapid automation. And there are likely more scenarios that we are only now discovering.
The question is, what do we want to have happen in each of these scenarios, and how can public policy support good outcomes for workers, employers, and the public? To be clear, technological change is not inevitable. Unions should be able to negotiate over new technology and whether and how it’s adopted, and public policies should establish strong guardrails to ensure responsible development and deployment.
That said, education, training, and reskilling of incumbent and prospective workers will need to be one ingredient of any response, and here the challenge is that the U.S. has a weak adult training system. The formal workforce development system established by the Workforce Innovation and Opportunity Act (WIOA) has a small footprint. Rather, it is the sprawling network of community colleges that serves as the country’s training system for the majority of adult workers. Ideally, this network would be the basis of a regional, sectoral training system that gives workers the skills and career pathways to move within and between employers in response to technological change.
Unions have significant leadership expertise and experience to bring to the table here. Some of the most successful models for incumbent worker training in the U.S. have been labor-management training partnerships, where unions and high-road employers work collaboratively to solve challenges in their industry. Crucially, these partnerships prioritize attention to job quality and protecting economic security, as well as delivering on-the-job, paid training. Similarly, union apprenticeship programs have a proven track record of bringing new workers into an industry, delivering quality training while ensuring placement of workers in good jobs.
The executive orders: The federal executive order calls for the adaptation of current job training and education programs to assist workers impacted by technological change, but with few details and largely through a narrow displaced worker lens. The California order makes a general call for the development of agency guidelines to support state workers in the adoption of AI tools, but provides few details about the scope of that support.
5. Invest in applied technology impact studies
A key challenge facing both worker advocates and policymakers is that we don’t know enough about the variety of workplace management technologies being developed, or about the range of impacts they can have on workers (both good and bad). There are real obstacles to research here: digital technologies are often opaque and not readily observable by workers, and impacts on workers (such as loss of autonomy) are not always easy to quantify or identify in the short term.
But good public policy requires a thorough understanding of the problem being addressed. Government should therefore fund applied, fast-turnaround studies on how emerging workplace technologies such as electronic monitoring, algorithmic management, and AI are impacting workers. Specifically:
- Research projects should analyze the technologies currently being used as well as those being developed; how employers are using them and the determinants of adoption; and the actual and potential future effects on workers.
- Effects on workers to be studied should include (a) whether new digital technologies are enabling employers to violate existing anti-discrimination, wage and hour, health and safety, and right-to-organize laws, as well as (b) a broader set of effects that potentially require regulation (e.g., deskilling, job and wage loss, productivity speed-ups, and outsourcing/shifts to contingent work).
- Research should prioritize technologies that are currently of particular concern, such as facial recognition, invasive surveillance, behavioral analytics, psychological profiling software, productivity management systems, and generative AI.
- While a range of methods will be useful, we especially recommend in-depth industry studies, given their ability to generate concrete information about specific jobs and technologies useful to policymakers and worker advocates; see the Labor Center’s industry studies for examples.
- In parallel, labor agencies should conduct a series of industry-specific random workplace audits (as appropriate under their authorizing laws) to establish whether and how often worker management technologies are used to violate or evade employment and labor laws.
The executive orders: Both executive orders charge their relevant agencies with conducting research on the impacts of AI and other digital technologies on workers, but with very little detail about the scope of that research. In addition, both orders appear to be more concerned with automation than with effects on job quality.
Looking Ahead
Government’s fundamental responsibility is to uphold people’s basic rights to freedom, privacy, livelihood, and equal treatment. New technologies should be considered where they offer the greatest potential benefit to both workers and the public, where clear expectations and measurable metrics for those benefits can be articulated, and where frameworks can be implemented to monitor and reduce risks.
We understand that the recent executive orders on AI are only the first steps in what will be months and years of work by government agencies to develop and implement core ideas laid out in the orders. Above we have outlined specific strategies to ensure that workplace technologies are consistently addressed, and public sector workers consistently included, throughout this process. Most importantly, we have argued that worker participation will be key to ensuring that the delivery of public services benefits from the introduction of AI, rather than being degraded by it.
with Annette Bernhardt
originally published at: https://laborcenter.berkeley.edu/what-workers-and-unions-stand-to-gain-from-recent-executive-orders-on-artificial-intelligence/