Just a few months after Elon Musk stepped back from his informal role leading the Department of Government Efficiency (DOGE), we have a clearer picture of his vision of AI-driven governance, one that seems far more concerned with consolidating power than with serving the public. Yet we should remember that a different administration could use the same technology to advance a better future for AI in government.
For many on the American left, DOGE's endgame looks like a nightmare: a state run by automated systems that serves an elite few at the expense of everyone else. That vision includes AI rewriting government rules at scale, wage-free bots replacing human workers, and a nonpartisan civil service pressured into adopting Musk's Grok AI chatbot, built in his own image and given to disturbingly prejudiced and antisemitic output. And yet, for all of Musk's claims about improving efficiency, few cost savings have materialized and few successful automations have been demonstrated.
From the start of the second Trump administration, DOGE stood in for the US Digital Service. That office, created under the Obama administration to provide technical support to agencies across the executive branch, was replaced by one reportedly tasked with traumatizing the federal workforce and slashing resources. The problem in this particular dystopia isn't the machines and their remarkable capabilities (or lack thereof) but the goals of the people directing them.
One of the most consequential effects of the Trump administration's and DOGE's actions has been to polarize the politics of AI. Even as the administration rails against "woke AI" and the supposed liberal bias of Big Tech, some surveys suggest that the American left has become measurably more resistant to developing the technology, and more skeptical of its effects on their future, than their counterparts on the right. That follows a familiar pattern in US politics, of course, but it points to a potential political realignment with enormous consequences.
People are right, both morally and strategically, to push the Democratic Party to rely less on money from billionaires and corporations, especially in the tech industry. But that movement should decouple the technologies Big Tech promotes from Big Tech's corporate agendas. Optimism about the beneficial uses of AI does not require endorsing the firms that currently dominate its development. Treating the technology as inseparable from those corporations risks unilateral disarmament as AI redistributes power throughout democracy. AI can be a legitimate tool for empowering workers, governing well, and advancing the public good, even as it is wielded by oligarchs to enrich themselves and further their interests.
A more constructive version of DOGE could have redirected the Digital Service to coordinate and promote the many AI use cases already being explored across the US government. Following the lead of countries like Canada, each project could have been required to publish a detailed disclosure explaining how it would comply with a unified set of principles for responsible use, principles that protect civil rights while improving government efficiency.
Pointed toward different ends, AI could have produced celebrated successes rather than national scandals.
A different administration might have made AI translation tools widely available across government services, removing language barriers for US citizens, residents, and visitors, instead of revoking the modest translation mandates already in place. AI could have sped up eligibility reviews for Social Security disability benefits by performing preliminary document screening, easing the notorious backlog in which 30,000 Americans die each year while awaiting review. Instead, deaths among people awaiting benefits may now double because of DOGE's cuts. The technology could have accelerated the ministerial work of federal immigration judges, helping them reduce a backlog of millions of pending cases. Instead, the courts must now confront that backlog even as immigration judges are dismissed.
Getting to those better outcomes requires real change. Electing leaders committed to using AI more wisely in government would help, but the solution lies far more in principles and values than in the technology itself. As the historian Melvin Kranzberg observed, technology is never neutral: its consequences depend on the contexts in which it is used and the ends toward which it is directed. Whether technology does good or harm comes down to the choices of the people who wield it.
The Trump administration's push to use AI to advance its deregulatory agenda is a case in point. DOGE has unveiled an "AI Deregulation Decision Tool" that it intends to use, through automated decision-making, to eliminate roughly half of a catalog of nearly 200,000 federal regulations. This follows similar proposals to use AI for sweeping revisions of administrative code in Ohio, Virginia, and the US Congress.
In theory, at least, this kind of legal revision could be approached in a nonpartisan, nonideological way. It could be tasked with eliminating obsolete regulations from centuries past, streamlining redundant provisions,
and updating and harmonizing legal terminology. Such a neutral, nonpartisan statutory cleanup has been carried out in Ireland (by people, not AI) and elsewhere. AI is exceptionally well suited to this kind of linguistic analysis at large scale and extraordinary speed.
But we should never assume AI will be used in that impartial way. The backers of the Ohio, Virginia, congressional, and DOGE initiatives are explicitly ideological. They see "AI as a catalyst for deregulation," in the words of one supportive US senator, freeing corporations from rules they claim stifle economic growth. In that context, AI cannot act as a neutral analyst performing a practical function; it becomes a tool for human advocates with a partisan agenda.
The moral of this story is that good outcomes for workers and the public interest are possible as AI transforms governance, but they require two things: electing leaders who genuinely champion and act in the public good, and demanding transparency in how government uses the technology.
Agencies should adopt these technologies within ethical frameworks, overseen by independent inspectors and backed by legislation. Public oversight helps bind current and future administrations to use such technologies in the public interest and helps guard against corruption.
These ideas are not new; they are precisely the safeguards that Trump, Musk, and DOGE have swept aside over the past half-year. Transparency and privacy rules were bypassed or ignored, independent agency inspectors general were fired, and congressional budget mandates were undermined. For months, it has been unclear who is responsible and accountable for DOGE's actions. Under those circumstances, the public should be just as skeptical of any executive's use of AI.
We believe everyone should be wary of today's AI ecosystem and of the powerful figures steering it toward their own ends. But we should also recognize that the technology can be separated from the people who build, manipulate, and profit from it, and that beneficial uses of AI are both possible and achievable.
This essay was written with Nathan E. Sanders, and originally appeared in Tech Policy Press.