July 2025: Some Issues with Artificial Intelligence
Empire of AI should inform how we consider “Winning the Race: America’s AI Action Plan” (July 2025)
Today, July 28, 2025, I finished listening to this book:
Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI by Karen Hao
The Audible description reads:
“From a brilliant longtime AI insider with intimate access to the world of Sam Altman's Open AI from the beginning, an eye-opening account of arguably the most fateful tech arms race in history, reshaping the planet in real time, from the cockpit of the company that is driving the frenzy.”
If you are interested in this book, I recommend listening to or reading the epilogue first; it will help you understand the entire direction of the book. The book is both fascinating and disturbing. It is fascinating because of the advances made in computer science. It is disturbing because the author describes the variety of ethical and consequential decisions made by this private company. There is also a clear explanation of how the reverberations of colonialism manifest in AI development: in its reliance on human labor, in environmental degradation, and in its treatment of privacy and transparency.
<><><>
Last week, the current administration in the White House issued a document called “Winning the Race: America’s AI Action Plan” (July 2025).
In an appropriately ironic act, I then asked ChatGPT the following question and received the response below: Does this document include the principles of Privacy by Design, and does it assure the protection of the fundamental rights of individuals?
<><><> ChatGPT-derived analysis of my question about the newly released AI Action Plan, July 2025
Based on its content, “America’s AI Action Plan” does not meaningfully incorporate the core principles of Privacy by Design, nor does it robustly assure the protection of the fundamental rights of individuals.
Fundamental Rights Protections
The document reflects a national security and economic competitiveness lens, with privacy and civil liberties mostly mentioned in passing (e.g., when referencing data sharing or scientific datasets).
The plan does not include rights-based frameworks such as the Fair Information Practice Principles (FIPPs), GDPR-style data protection, or algorithmic accountability measures to safeguard against discrimination or surveillance.
Summary: The document does not align with the Privacy by Design framework and falls short of assuring the protection of fundamental individual rights. Its priorities are centered on:
U.S. global dominance in AI
Deregulation to accelerate innovation
Security from adversarial actors
Privacy is largely absent or subordinated to these goals.
If you're conducting a formal policy review or public interest analysis, I can help develop a structured critique or recommendations based on privacy, human rights, and democratic accountability frameworks.
<><><><>
Here is a structured critique of the America’s AI Action Plan through the lens of Privacy, Human Rights, and Democratic Accountability, followed by recommendations to align future AI policies with these principles.
📌 Structured Critique
I. Privacy Protections – Insufficient Integration
Findings:
Privacy is not treated as a design imperative or baseline right.
The document emphasizes data use for innovation and national security with little regard for data minimization, consent, or user control.
Privacy by Design principles (e.g., privacy as the default, lifecycle protection, transparency) are missing or reduced to token mentions.
Implications:
Individuals’ personal data may be exploited without adequate safeguards.
Risk of surveillance creep, particularly through government procurement and defense applications.
Lack of privacy frameworks undermines trust in public-sector AI systems.
II. Human Rights – Weak Normative Anchoring
Findings:
The plan focuses almost exclusively on U.S. geopolitical dominance and economic power.
It lacks a rights-based framing—no references to civil liberties, anti-discrimination protections, or ethical AI principles.
The overt elimination of references to DEI (diversity, equity, and inclusion) and climate change removes guardrails that mitigate bias and structural harm.
Implications:
AI systems developed under this plan could deepen inequalities and harm marginalized groups.
Export of U.S. AI without human rights benchmarks risks contributing to global authoritarianism or surveillance abuses.
Undermines U.S. leadership in developing trustworthy and equitable technology.
III. Democratic Accountability – Minimal Transparency or Oversight
Findings:
While promoting transparency in evaluations and procurement, the plan lacks any public reporting requirements, participatory mechanisms, or independent audits.
Disempowers civilian oversight by framing public interest protections (e.g., FTC, state regulations) as “onerous red tape.”
No role for civil society, academia, or public representatives in shaping governance or evaluating impact.
Implications:
Federal AI systems may operate without external scrutiny or consent.
Government use of AI could evade public accountability, particularly in law enforcement, intelligence, and judicial contexts.
The plan centralizes power while reducing democratic checks and balances.
✅ Recommendations for Reform
A. Embed Privacy by Design and Default
Mandate privacy impact assessments for all federally funded AI systems.
Require all government-procured AI models to:
Use data minimization (see the sketch after this list)
Provide user access and correction rights
Maintain end-to-end encryption and robust lifecycle data protection
Incorporate Privacy by Design standards into the NIST AI Risk Management Framework.
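To make requirements like data minimization and lifecycle protection more concrete, here is a minimal illustrative sketch in Python. It is not drawn from the Action Plan or any agency system; the field names, the 30-day retention window, and the event structure are hypothetical, chosen only to show the pattern of collecting less by default and deleting on schedule.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical allow-list: collect only the fields the service actually needs.
ALLOWED_FIELDS = {"request_id", "query_text", "language"}

# Hypothetical retention window, illustrating lifecycle protection.
RETENTION = timedelta(days=30)

@dataclass
class IntakeRecord:
    data: dict
    collected_at: datetime

def minimize(raw_event: dict) -> IntakeRecord:
    """Drop every field not on the allow-list (data minimization by default)."""
    kept = {k: v for k, v in raw_event.items() if k in ALLOWED_FIELDS}
    return IntakeRecord(data=kept, collected_at=datetime.now(timezone.utc))

def expired(record: IntakeRecord) -> bool:
    """True once a record outlives its retention window and should be deleted."""
    return datetime.now(timezone.utc) - record.collected_at > RETENTION

# Identifying fields such as 'email' and 'ip_address' never reach storage.
event = {"request_id": "r-1", "query_text": "benefits eligibility",
         "language": "en", "email": "person@example.com", "ip_address": "203.0.113.7"}
record = minimize(event)
assert "email" not in record.data and "ip_address" not in record.data
```

The point of the allow-list design is that identifying data is excluded at intake rather than filtered out later, which is what treating privacy as the default, rather than an afterthought, requires.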
B. Restore Human Rights Commitments
Reintroduce and expand requirements for:
Non-discrimination audits
Accessibility standards
Bias mitigation protocols
Commit to algorithmic fairness and explainability for all public-sector AI applications, including law enforcement, immigration, and healthcare.
Endorse the UN Guiding Principles on Business and Human Rights and integrate them into AI export and development policy.
C. Strengthen Democratic Accountability
Establish an Independent Federal AI Oversight Agency with power to:
Audit government and corporate AI deployments
Receive and investigate complaints from the public
Issue binding guidance
Require public disclosure of:
AI models used by government agencies
Evaluations, procurement criteria, and training data sources
Create formal public participation channels (e.g., citizens’ panels, public comment periods) in AI policy formation and procurement.
D. Guard Against Surveillance and Abuse
Ban facial recognition and predictive policing systems unless rigorously evaluated for legality, accuracy, and civil liberties impact.
Enforce clear limitations on data fusion and cross-agency surveillance using AI.
Strengthen the FTC’s role in enforcing AI-related privacy and consumer protection violations.
E. Ensure International Leadership Reflects Democratic Values
Promote pluralistic, multilateral AI governance grounded in democratic values—not only national power.
Tie AI export support to:
Human rights due diligence
Data protection standards
Ethical use commitments by partner governments
🔚 Conclusion
While the America’s AI Action Plan sets out an ambitious roadmap for global dominance, it fails to uphold the foundational values of privacy, human rights, and democratic accountability. To build an AI future that is both powerful and just, policy must center individual dignity, transparency, and social equity—not only state and market power.
Elements of Privacy by Design
Proactive not reactive; preventive not remedial
Privacy as the default setting (illustrated in the sketch after this list)
Privacy embedded into design
Full functionality – positive-sum, not zero-sum
End-to-end security – full lifecycle protection
Visibility and transparency – keep it open
Respect for user privacy – keep it user-centric
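As a hedged illustration of the second, third, and fifth elements above (privacy as the default setting, privacy embedded into design, end-to-end security), a product’s settings object might ship with the most protective values preselected. The option names below are hypothetical and not drawn from any actual product; this is a sketch of the principle, not an implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PrivacySettings:
    # Every default is the most protective choice; users opt in, never opt out.
    share_usage_analytics: bool = False      # no telemetry unless explicitly enabled
    personalize_with_history: bool = False   # no profiling by default
    retain_chat_logs_days: int = 0           # nothing retained unless the user chooses
    encrypt_at_rest: bool = True             # full-lifecycle protection on by default

# A new user receives the protective configuration without taking any action.
default_user = PrivacySettings()
assert default_user.share_usage_analytics is False and default_user.encrypt_at_rest is True
```

Users can still choose to share more, but the burden of action falls on loosening protections, not on securing them.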
From the UC Berkeley Haas School of Business, “How Is AI Transforming Business?”:
According to Bloomberg, generative AI is anticipated to become a US$1.3 trillion market by 2032. A recent survey by New Vantage Partners found that 91.7 percent of top organizations are investing in Artificial Intelligence activities, and 54 percent of organizations using Artificial Intelligence reported cost savings and efficiencies in a study by IBM. With advancements in machine learning, automation, and natural language processing, AI, particularly generative AI, is revolutionizing traditional business models in unprecedented ways and is poised to drive a new wave of innovation and evolution across industries.
Artificial Intelligence offers a vast array of capabilities with virtually limitless potential, including automating repetitive tasks, providing predictive insights, enabling personalized customer experiences, optimizing supply chain management, and improving risk assessment. As generative AI continues to transform every aspect of modern business, organizations that embrace it will be empowered to unlock new opportunities, achieve operational efficiencies, and drive sustainable growth, gaining a competitive advantage.
https://em-executive.berkeley.edu/artificial-intelligence-business-strategies