The rise of artificial intelligence (AI) has brought remarkable advances in recent years, but it has also intensified debates about data, particularly with the emergence of Large Language Models (LLMs). In this guest essay, Alastair Moore, Founding Partner at The Building Blocks and a long-time EV member, asks how we can govern and control AI systems as their capabilities continue to improve.
Debates about data and its societal implications are not new. Jaron Lanier has long argued that users should hold rights over the data they generate, Shoshana Zuboff has examined the dynamics of surveillance capitalism, and Glen Weyl has proposed fresh approaches to data ownership. As AI evolves, their insights have become even more relevant to the intersection of technology, data, and society.
Moore argues that the era of Web3, characterized by decentralized and transparent technologies, is approaching, and that Web3 offers a potential path to some of the key challenges AI faces as it scales across our technological systems. In this essay he focuses on two of those challenges and their possible solutions: the relationship with the creators whose information and data are used to train AI systems, and the alignment of multi-agent systems.
Moore offers examples of how integrating AI with Web3 can introduce new tools for control, accountability, and trust in AI systems, tools that are crucial to ensuring AI is used responsibly.
One of those challenges is data ownership. How the data used to train AI systems should be attributed and owned is a longstanding and increasingly complex problem. Moore argues that in a future where human- and AI-generated media coexist, creators must be recognized, attributed, and compensated for their contributions: they should receive proper credit, including royalty payments, when their work is used as input to AI systems.
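To make the royalty idea concrete, here is a minimal sketch in Python of how it might work. Everything here is hypothetical and illustrative (the creator names, scores, and fee are invented, and this is not any specific framework's mechanism): given attribution scores linking a generated output back to creators' works, a generation fee is split pro rata.

```python
# Hypothetical sketch: pro-rata royalties from attribution scores.
# Names, scores, and the fee are illustrative only.

def royalty_split(attribution_scores: dict[str, float], fee: float) -> dict[str, float]:
    """Apportion a generation fee among creators in proportion to
    how strongly each creator's work is attributed to the output."""
    total = sum(attribution_scores.values())
    if total == 0:
        return {creator: 0.0 for creator in attribution_scores}
    return {creator: fee * score / total
            for creator, score in attribution_scores.items()}

# One AI-generated image, with made-up visual-attribution scores
# against three creators' training images.
scores = {"alice": 0.6, "bob": 0.3, "carol": 0.1}
print(royalty_split(scores, fee=2.50))
# {'alice': 1.5, 'bob': 0.75, 'carol': 0.25}
```

The real difficulty, of course, lies in producing trustworthy attribution scores in the first place, which is where the research below comes in.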
To address this challenge, Moore points to recent research by Professor John Collomosse at the University of Surrey and Adobe, which introduces EKILA, a decentralized framework that enables creatives to receive recognition and reward for their contributions to media produced by generative AI (GenAI). EKILA proposes a robust visual attribution technique and combines it with an emerging content provenance standard from the Coalition for Content Provenance and Authenticity (C2PA) to determine the generative model and training data responsible for an AI-generated image. In addition, EKILA extends the non-fungible token (NFT) ecosystem with a tokenized representation of rights, enabling a triangular relationship between an asset's ownership, rights, and attribution.
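That triangular structure is easier to see as data. The sketch below uses invented field names (this is not EKILA's actual schema or API) to show how an ownership record, a tokenized rights grant, and an attribution claim can all reference the same asset, closing the triangle:

```python
# Hypothetical sketch of the ownership / rights / attribution triangle.
# Field names are invented for illustration; they are not EKILA's schema.
from dataclasses import dataclass

@dataclass
class Ownership:       # who holds the asset (e.g. recorded as an NFT)
    asset_id: str
    owner: str

@dataclass
class RightsToken:     # tokenized grant of usage rights over the asset
    asset_id: str
    licensee: str
    usage: str         # e.g. "training-input"

@dataclass
class Attribution:     # provenance claim tying an output back to the asset
    asset_id: str
    output_id: str     # the AI-generated image
    score: float       # strength of the visual-attribution match

# All three records reference the same asset, closing the triangle:
asset = "asset-123"
triangle = (
    Ownership(asset, owner="alice"),
    RightsToken(asset, licensee="model-operator", usage="training-input"),
    Attribution(asset, output_id="gen-042", score=0.87),
)
```

The design point is that no single record is authoritative on its own: ownership without attribution cannot trigger royalties, and attribution without a rights token cannot prove the use was licensed.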
By integrating AI with Web3 technologies through frameworks like EKILA, Moore suggests, we can build the control, accountability, and trust needed for the responsible and ethical use of AI going forward.
In conclusion, Moore's essay underlines the importance of governing and controlling AI systems as they grow more capable. Web3 technologies, he argues, are well suited to the challenges AI raises, particularly around data ownership and the alignment of multi-agent systems, and leveraging them can help ensure AI develops responsibly and to society's benefit.