Governance from within
A series of thought-provoking topics to awaken the industry to the need for self-governance
I’ve been researching the impact of AI on society’s information and decision-making processes for a while. The implications span education, politics, news, and brand marketing, presenting vast challenges we must address.
I believe AI safety is a socio-technological challenge that the industry and the public should embrace together. From a governance standpoint, while I want to trust the tech industry to handle external threats (often of their own making), we must also address vulnerabilities within society's fabric.
We also need more resources and people focused on our epistemic security. The Alan Turing Institute defines an epistemically secure society as one that reliably averts threats to the processes by which reliable information is produced, distributed, acquired, and assessed within the society.
We need clear governance frameworks created by and for the industry. Policymakers are struggling to keep pace, and with the rapid development of AI, relying on the slow mechanisms of government is insufficient. Governance must come from within.
My primary concern is the disempowerment of human decision-makers, which can occur in many ways. In the upcoming chapters, I will explore:
· The Risks and Implications of AI-Enabled Persuasion: The need for a governance framework by each industry actor.
· Principles of AI Governance: From the largest corporations to the smallest operations.
· Ethical Attention: The argument for respecting and enabling consumer agency.
· Beyond Manipulation: Deception and the fragility of trust.
· AI Literacy as the Best Defence: The evolution of work through learning.
As I work through these topics, I welcome feedback and further references.
My goal is to develop a governance approach that will help society adapt to the imminent tsunami of change.