
Methodologies for AI regulation
Experts from business, economics, healthcare, and policy provide perspectives on topics that warrant attention
The velocity of AI progress is accelerating, and its impact on the economy, education, healthcare, research, employment, legislation, and daily life will be extensive and significant. Initiatives for regulation are emerging at both federal and state levels.
In July, President Trump unveiled executive orders along with an AI Action Plan aimed at hastening the advancement of artificial intelligence and establishing the U.S. as the preeminent force in this technology.
Among other changes, the orders bar the federal government from acquiring AI systems it deems ideologically biased, relax limitations on the permitting process for new AI infrastructure projects, and encourage the global export of American AI innovations.
The National Conference of State Legislatures reports that lawmakers in all 50 states considered AI-related legislation in the 2025 session.
Researchers on campus from various disciplines share their views on areas that need examination.

Dangers of scams, price-fixing conspiracies
Eugene Soltes holds the McLean Family Professorship of Business Administration at Harvard Business School.
As artificial intelligence becomes increasingly integrated within the ecosystems of business and finance, we are swiftly witnessing the emergence of unprecedented dangers that our legal systems and corporate entities are ill-equipped to tackle.
Take algorithmic pricing, for instance. Corporations utilizing AI to enhance profits can already observe bots autonomously “discovering” that colluding on prices leads to greater earnings. When companies’ algorithms subtly collaborate to raise prices, who is accountable — the corporations, software developers, or engineers? Current antitrust regulations provide no definitive answer.
The peril intensifies when AI’s optimization abilities directly influence human actions.
Research shows that AI possesses persuasive skills that surpass those of experienced negotiators. When applied to at-risk populations, AI turns conventional scams into personalized, AI-customized schemes.
“Pig-butchering frauds” [where criminals gradually build trust with victims] that once necessitated teams of human operators can now be automated, customized, and widely disseminated, deceiving even the most vigilant individuals with deep-fake audio and video.
Most concerning is the potential for AI agents with direct access to financial systems, notably within cryptocurrency networks.
Imagine an AI agent granted access to a cryptocurrency wallet and instructed to “expand its portfolio.” Unlike traditional banking, where transactions can be halted or reversed, once an AI executes a fraudulent smart contract or instigates a detrimental transaction, no authority can intervene.
The combination of unchangeable smart contracts and autonomous crypto transactions generates extraordinary risks — including automated bounty systems for real-world violence that operate without human involvement.
These scenarios are not merely speculative; they are emerging realities our existing institutions cannot effectively mitigate or prosecute. However, solutions do exist: improved crypto surveillance, mandatory kill switches for AI agents, and human oversight requirements for models.
Addressing these challenges necessitates collaboration among innovators who develop AI technology and governments tasked with regulating its potential for harm.
The issue isn’t whether these risks will arise, but whether we’ll take action before they do.

Opting for a path of pluralism
Danielle Allen is the James Bryant Conant University Professor and Director of the Edmond and Lily Safra Center for Ethics. She also leads the Democratic Knowledge Project and the Allen Lab for Democracy Renovation at the Harvard Kennedy School.

As we observe at my HKS laboratory, the Allen Lab for Democracy Renovation, three models for governing AI currently operate in the worldwide arena: an accelerationist model, an effective altruism model, and a pluralism model.
In the accelerationist model, the intention is to advance swiftly and disrupt, hastening technological progress as much as possible to discover new solutions to global challenges (ranging from labor issues to climate change), while organizing society primarily around the success of high IQ individuals.
Human labor is replaced; the planet becomes dispensable thanks to access to Mars; intelligent individuals use tech-driven genetic selection to create even more intelligent offspring.
Within the effective altruism model, there is also an objective to advance quickly and disrupt, but there is a recognition that replacing human labor with technology will harm a significant portion of humanity. Thus, the dedication to technological advancement is coupled with a plan to share the productivity gains that accrue to tech corporations with comparatively smaller workforces through universal basic income initiatives.
In the pluralism model, technological development is oriented not towards surpassing and replacing human intelligence, but rather towards enhancing and extending the diverse forms of human intelligence with equally diverse kinds of machine intelligence.
The aim here is to activate and promote human pluralism for the benefits of creativity, innovation, and cultural richness, while fully incorporating the broad population into the productive economy.
Pennsylvania’s recent pledge to utilize technology in ways that empower rather than displace workers is one example, as is Utah’s newly enacted Digital Choice Act, which restores to individual users ownership of their data on social media platforms and mandates interoperability among platforms, transferring authority from tech corporations to citizens and consumers.
If the U.S. aims to triumph in the AI competition as the type of society we are — a liberated society of free and equal self-governing citizens — then it is essential to embrace the third model. Let’s not forsake democracy and liberty when we discard “woke” ideologies.

Guardrails for mental health advice, support
Ryan McBain serves as an assistant professor at Harvard Medical School and is a senior policy researcher at RAND.
As more individuals, including adolescents, turn to AI for mental health guidance and emotional support, regulations should aim for two objectives: mitigate harm and promote timely access to evidence-based resources. People will continue to bring sensitive questions to chatbots. Policies should enhance the safety and utility of those interactions rather than attempt to eliminate them.
Some protective measures already exist.
Systems like ChatGPT and Claude frequently decline “very high-risk” suicide inquiries and redirect users to the 988 Suicide & Crisis Lifeline.
However, many situations are nuanced. Framed as a question about survival knots for a camping trip, a request might elicit instructions for tying a noose; framed as dieting before a wedding, it might yield recommendations for rapid weight loss.
Regulatory priorities should capture the nuances of this emerging technology.
Firstly, mandate standardized, clinician-anchored benchmarks for suicide-related inquiries — with public accountability. Benchmarks should include multi-turn (back-and-forth) dialogues that provide enough context to assess the nuances described above, in which chatbots can be influenced to cross a red line.
Secondly, strengthen crisis routing with updated 988 information, geolocated resources, and “support-plus-safety” frameworks that acknowledge individuals’ emotions, promote help-seeking, and minimize detailed discussion of means of harm.
Thirdly, prioritize privacy. Ban advertising and profiling in mental-health interactions, limit data retention, and enforce a “transient memory” mode for sensitive queries.
Fourthly, align claims with evidence. If a model is promoted for mental health support, it should adhere to a duty-of-care standard — through pre-deployment assessments, post-deployment monitoring, independent evaluations, and alignment with risk-management systems.
Fifthly, the administration should finance independent research through NIH and similar avenues to ensure safety assessments keep pace with model advancements.
It is still early enough in the AI era to establish a high standard for benchmarks, privacy protocols, and crisis routing, while fostering transparency through evaluations and public reporting.
Regulators can also incentivize performance: for instance, by allowing systems that meet rigorous criteria to offer more expansive mental health functionalities such as clinical decision support.

Embrace global collaboration
David Yang is an economics professor and director of the Center for History and Economics at Harvard, whose research draws insights from China.

Current regulations on AI are significantly shaped by a narrative of geopolitical rivalry, often seen as a zero-sum or even negative-sum game. It is vital to contest this viewpoint and acknowledge the substantial potential, and arguably necessity, for global collaboration in this technological sphere.
The history of AI development, with its prominently international leading teams, is a testament to such cooperation. Framing AI as a dual-use technology, for example, can obstruct coordination on international AI safety standards and dialogues.
My collaborators and I are investigating how narratives surrounding technology have transformed over recent decades, aiming to comprehend the dynamics and forces at work, specifically how competitive narratives develop and affect policymaking.
Additionally, the U.S. AI strategy has recently emphasized sustaining American leadership in innovation and the international marketplace.
Nonetheless, AI solutions designed in one innovation center may not be applicable across all global contexts. In a recent study with my associate Josh Lerner from HBS and partners, we demonstrate that China’s rise as a significant innovation center has catalyzed creativity and entrepreneurship in other developing markets, producing solutions better suited to local circumstances than those designed solely with the U.S. in mind.
Thus, achieving equilibrium is essential: safeguarding U.S. AI innovation and technological supremacy while promoting local collaborations and entrepreneurship. This strategy guarantees that AI technology, its applications, and the overall trajectory of innovation remain pertinent to local environments and can reach a worldwide audience.
Ironically, ceding more control could, in my view, strengthen the technological and market position of U.S. AI pioneers.

Promote accountability alongside innovation
Paulo Carvão is a senior fellow at the Mossavar-Rahmani Center for Business and Government at the Harvard Kennedy School, specializing in AI regulation in the U.S.

The Trump administration’s AI Action Plan signifies a transition from cautious regulation to swift industrial growth. Presented as a rallying point for American technological leadership, the plan relies on private-sector guidance to propel innovation, global uptake, and economic expansion.
Earlier technologies, such as internet platforms and social media, emerged without regulatory frameworks. Policymakers from the 1990s to the 2010s deliberately chose to allow the sector to develop unregulated and shielded from liability.
The swift adoption of AI is occurring amid increased awareness of the societal consequences stemming from previous technological revolutions. Nonetheless, the sector and its primary investors advocate for utilizing a similar approach, characterized by few safeguards and abundant incentives.
What stands out about the recently unveiled strategy is what it lacks. It regards cautionary measures as obstacles to progress, placing faith in market dynamics and voluntary actions.
This might draw investment, yet it leaves significant queries unresolved: Who guarantees equity in algorithmic decision-making? How do we protect workers displaced by automation? What transpires when infrastructure investments prioritize computing abilities over community impacts?
Nevertheless, the plan correctly identifies AI as a comprehensive challenge, spanning from chips to models to standards, and acknowledges the urgent need for U.S. infrastructure and workforce enhancement. Its international strategy presents a persuasive framework for global influence.
Ultimately, innovation and accountability need not be mutually exclusive. They represent a combined necessity.
The way forward is to encourage standards-based independent evaluations, foster a marketplace for compliance and audit services, and strengthen government capabilities to assess AI systems. To earn the world’s trust in American-engineered AI, we must ensure it secures that trust, both domestically and abroad.

Regulation that acknowledges healthcare constraints
Bernardo Bizzo is the senior director of Mass General Brigham AI and an assistant professor of radiology at Harvard Medical School.

The regulation of clinical AI has been misaligned with the challenges faced by clinicians.
To comply with existing device pathways, vendors often limit AI to specific conditions and rigid workflows. This may lower perceived risk and yield narrow performance metrics, but it also limits impact and adoption, failing to address the real challenge in U.S. healthcare: efficiency amid increasing demand and workforce shortages.
Foundation models can generate radiology reports, summarize charts, and organize routine tasks in adaptive workflows. The FDA has begun to address iterative software, yet there remains no established pathway specifically aimed at clinical copilots based on foundation models that perpetually learn while generating documentation across various conditions.
Elements of a deregulatory stance could be beneficial if applied judiciously.
The U.S. AI Action Plan suggests creating an AI evaluations ecosystem and regulatory sandboxes that facilitate swift but monitored testing in real-world environments, including healthcare. This aligns with the Healthcare AI Challenge, a collaborative endeavor supported by the MGB AI Arena that enables experts nationwide to assess AI on a large scale using multisite real-world data.
With FDA involvement, this concept can yield the evidence required by agencies and payers alongside the clinical utility evaluations that providers seek.
Some pre-market conditions may eventually be relaxed, although no measures have been enacted yet. If this happens, more accountability will shift to developers and deploying providers. This transition is only practical if providers possess effective tools and resources for local validation and monitoring, given that many are already overwhelmed.
Simultaneously, developers are introducing increasingly powerful models, and while some await a regulated, feasible pathway for clinical copilots, many are pushing experimental approaches into pilots or clinical research processes, frequently without suitable safeguards.
In terms of post-deployment regulation, I would advocate for more oversight.
Mandate local validation prior to going live, continuous post-market scrutiny such as the American College of Radiology’s Assess-AI registry, and routine reporting back to the FDA so that regulators can observe effectiveness and safety in practice, instead of mainly relying on seldom-utilized medical device reports despite known issues with generalizability.
Healthcare AI requires policies that broaden access to trusted and affordable computing resources, adopt AI monitoring systems and registries, enable sector-wide testbeds, and reward demonstrated efficiency that protects patients without hindering progress.