This week, the Trump administration released an AI Action Plan, which sets out policy guidelines for artificial intelligence and describes how the President wants the United States to lead globally in the development of AI tools. The plan has three pillars: accelerating AI innovation, building more American AI infrastructure, and leading international AI diplomacy. President Trump also signed a trio of executive orders to drive action around the plan, including one designed to speed up the federal permitting process for new power plants and data centers and another aimed at promoting the export of American technology. Unlike previous guidelines released by the federal government, this plan focuses on accelerating AI innovation rather than addressing concerns such as model safety, environmental risks, and the potential for wealth concentration and job loss.
The plan and an accompanying executive order also focus on "Preventing Woke AI in the Federal Government." The administration insists that any AI model procured by a federal agency must promote "ideological neutrality." While the administration can only set rules for the federal government and those it contracts with, the message to tech leaders as a whole is clear. It will be a fascinating test of President Trump’s bully pulpit to see whether and how big tech companies respond.
Given the news swirling around AI from a policy perspective, this is a good time to take stock of the most recent and important developments in AI, and to consider how the evolution of this critical technology will affect you. This memo includes our strategic perspective on the impact artificial intelligence is having on the private sector and guidance on how we believe organizations across the spectrum should prepare for and respond to the massive developments underway in AI.
Recent Developments
AI Impact Debated. OpenAI CEO Sam Altman used a visit to Washington this week to argue that AI is already making Americans more productive and to promise to keep AI "democratic" by getting it into as many hands as possible. Altman’s comments came in response to public warnings from leaders across sectors that automation will threaten some white-collar and administrative roles even as it increases demand for AI-literate professionals. The CEOs of Ford, Anthropic, Shopify, and JPMorgan Chase, among others, have warned that AI will cause significant job losses in the coming years. Altman believes there is a "third path" between those who champion AI's unlimited potential across every sector and those who view AI as a threat to society.
New AI Models. Over the past few months, new versions of the most widely used AI models from OpenAI, Anthropic, and Google (Gemini) have been released to the public. Across the board, these models now engage more naturally with users, offering potential in customer service, healthcare, and education. Sectors such as finance, logistics, and software development are also deploying AI tools to improve efficiency and productivity. Companies including McKinsey, Walmart, and Goldman Sachs are moving quickly to develop their own models with varying capabilities and focus.
Government Application of AI. In addition to the plan released by the White House this week (see above), the Trump administration has made AI a core component of its overall governing strategy. The IRS, TSA, FAA, and Department of Defense are all using AI in core functions. Additionally, the Office of Management and Budget (OMB) released an AI Procurement Memo establishing requirements for agency AI procurement, including preferences for AI “developed and produced in the United States” and contract terms to protect government data. The OMB memos position the government’s AI policies as reflecting a “forward-leaning, pro-innovation, and pro-competition mindset” so that agencies can become “more agile, cost-effective, and efficient.”
Rise of Agentic AI. Agentic AI represents a leap from reactive tools to proactive, autonomous systems in which agents can reason, plan, and execute tasks independently, and it is quickly being deployed across sectors for customer service, marketing, healthcare, R&D, and more. Systems are also being tested in which autonomous agents cooperate and even negotiate with one another to achieve shared or conflicting objectives. Microsoft’s Copilot and Google’s Workspace tools are early examples of agentic AI reaching everyday workflows.
Training and Skills Development. Global data from LinkedIn suggests no group of workers has grown their AI skills more over the past year than those who serve in government. Comparing the 12 months ending April 2025 with the preceding 12 months, the total number of AI-related skills listed on the profiles of government employees increased 28%. The healthcare industry came in second with 22% growth, followed by employees of tech companies at 16%. Marketers and advertisers landed near the middle of the pack with a 12% increase in AI-related skills.
Infrastructure. There is growing concern about the need for additional AI infrastructure, most notably the creation of new data centers and energy sources. Four companies — Alphabet/Google, Microsoft, Meta and Amazon — expect to spend more than $300 billion this year on AI, while private investors and governments pour hundreds of billions more into AI infrastructure. Meanwhile, environmental advocates have raised concerns about the sustainability and long-term costs (financial and climate) created by the expansion of AI across sectors.
Growing Risk. Two respected nonprofits, the Future of Life Institute and SaferAI, report that the top AI firms “had worrying gaps on existential risk in their plans” and that “none of the companies has anything like a coherent, actionable plan” for controlling increasingly powerful systems. Anthropic scored highest in both reports, with a C+.
Fairness and Accountability. Academics and civil rights advocates are increasingly calling for formal policies to ensure the fairness and accountability of AI systems - keeping bias out of algorithms and training data, and establishing clear lines of responsibility for AI systems' decisions and actions. The goal is to ensure that the continued development of AI benefits society as a whole. Thus far, policymakers have avoided these issues in public discussions about AI regulation.
Looking Forward
We anticipate three trends will drive the continued development of AI in the short-term:
Proliferation of AI Agents: Expect widespread enterprise use of autonomous agents managing everything from supply chains to customer onboarding. They will not just execute predefined tasks but will make context-aware decisions, optimize workflows, and communicate across departments and systems without human intervention. In supply chain management, for example, agents will anticipate disruptions, negotiate with vendors, and reroute shipments based on live data. In customer and employee onboarding, they’ll personalize experiences, guide users through complex interfaces, and adapt to individual preferences, improving both efficiency and satisfaction.
Importantly, this transformation won’t be limited to tech companies. Healthcare providers will deploy agents to handle patient intake and diagnostic triage. Financial institutions will use them to monitor compliance, detect fraud, and deliver real-time financial advice. Public sector entities will rely on agents to streamline permitting, benefits administration, and emergency response.
Regulatory Fragmentation: The U.S., EU, and China are diverging. The EU AI Act, already in force, sets strict transparency and safety rules. U.S. policy is deregulatory. This regulatory fragmentation presents a major operational and strategic challenge. In the EU, companies must conduct detailed risk assessments for high-risk AI systems, maintain documentation on data provenance, and ensure human oversight in sensitive use cases like employment or law enforcement. In contrast, the U.S. has embraced a sector-by-sector approach, emphasizing innovation and voluntary frameworks over mandates - and individual states, such as California, are beginning to chart their own courses.
This divergence means global enterprises must now manage a patchwork of AI governance regimes, much as global operations already navigate international tax codes or privacy laws like GDPR and CCPA. Engineering teams will need to build flexible architectures that can turn features or behaviors on and off based on jurisdiction. Legal and compliance functions must expand, often adding AI ethicists and technical auditors.
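For engineering teams weighing that requirement, the following is a minimal sketch of what jurisdiction-aware feature gating can look like. It assumes a hypothetical set of jurisdictions, policy flags, and feature names (none drawn from any actual statute or product); real rules would be defined with counsel and encoded per use case.

    # Illustrative sketch only: a simple jurisdiction-based "policy gate" for AI features.
    # All jurisdictions, flags, and feature names below are hypothetical examples.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class AIPolicy:
        allow_automated_decisions: bool  # may the system act without human sign-off?
        require_explainability: bool     # must outputs ship with an explanation?
        log_data_provenance: bool        # must data sources be recorded?

    # Hypothetical per-jurisdiction defaults; real values would come from counsel.
    POLICIES = {
        "EU": AIPolicy(False, True, True),
        "US-FED": AIPolicy(True, False, False),
        "US-CA": AIPolicy(True, True, True),
    }
    STRICTEST = AIPolicy(False, True, True)  # fallback for unknown jurisdictions

    def feature_enabled(feature: str, jurisdiction: str) -> bool:
        """Return True if the named AI feature may run fully automated in this jurisdiction."""
        policy = POLICIES.get(jurisdiction, STRICTEST)
        high_risk = {"resume_screening", "loan_approval"}  # hypothetical high-risk use cases
        if feature in high_risk:
            return policy.allow_automated_decisions
        return True  # low-risk features are enabled everywhere in this sketch

    if __name__ == "__main__":
        for region in ("EU", "US-FED", "US-CA", "UNKNOWN"):
            print(region, feature_enabled("loan_approval", region))

The point of the pattern is simply to centralize jurisdiction rules in one policy table, so product teams can toggle behavior in one place rather than scattering legal logic across the codebase.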
Public Scrutiny and Ethical Demands: As AI becomes more embedded in consumer services, hiring platforms, health systems, and financial decision-making, questions of fairness, accountability, and harm will dominate the public discourse. The public and media will continue pressing for transparent, explainable, and unbiased AI. Boards will expect answers. Organizations can no longer claim ignorance or deflect responsibility to third-party vendors or opaque algorithms. Every output generated by AI will be treated as a reflection of the company’s values and governance.
Recommendations
With careful investment (of both time and money) and strategic foresight, there are ways to utilize AI to gain a durable competitive advantage. Our recommendations are as follows:
Training and Staffing. Prioritize workforce development - both acquiring top talent and reskilling existing employees around AI. Recruitment should focus on cross-functional expertise. Beyond machine learning engineers, hire AI product managers, data translators, ethicists, and domain specialists who can bridge business and technology. Importantly, empower existing and new teams to adopt AI tools responsibly. Ensure guardrails are in place, from data privacy to model oversight. Invest in change management so that when AI alters workflows, roles, and decision-making structures, there is clear direction provided, and followed, across your organization.
Shaping the Broader Narrative. Develop and communicate a clear, authentic narrative about how your organization views, uses, and advances AI. Are you focused on augmenting human capabilities? Driving efficiency and scale? Solving complex societal problems? A version of that same vision should be communicated externally as well, through public communications, marketing, investor briefings, and customer messaging. Doing so will help position your organization as a leading voice in the sector, allowing you to shape how audiences think about AI as it develops further. Public leadership should also make building public trust in AI a priority - confusion or concern among broad audiences will slow your ability to advance your AI efforts.
Developing Own Models / Training Models. While many organizations can succeed by fine-tuning or integrating existing AI models, some will need to build or customize models for their own specific applications. If your organization owns proprietary datasets - financial transactions, health records, user interactions, etc. - that data can fuel models that outperform generic alternatives. A commitment to integrating AI at this level requires investment in internal machine learning infrastructure, including tools for model training, evaluation, and monitoring. Consider forming an internal “AI Lab” to explore foundational and frontier AI research relevant to your sector. Training your own models is resource-intensive, but when done right, it can create defensible IP, deliver stronger performance, and provide lasting differentiation.
Contributing to Public Policy. Play an active role in shaping the policy frameworks that govern AI. Consider publishing principles that outline how you ensure fairness, privacy, and accountability in your AI systems. Join alliances or consortia to help shape regulations in ways that support innovation. Most importantly, be transparent about your efforts in order to build trust. Work with public officials to ensure that regulation and management of AI by local, state, and federal actors aligns with what your organization needs to succeed.
Interested in discussing these issues further? Contact us: info@onestrat.com