Decentralized AI On Blockchain Rivals OpenAI’s Lead

OpenAI’s Sora has captivated public attention, once again showcasing AI’s power to reshape our world. In the realm of cryptocurrency and blockchain, decentralized AI projects such as Gensyn, OORT, and Bittensor are emerging to accelerate AI development by leveraging the benefits of decentralization, including enhanced data privacy and cost savings. Built on blockchain technology and crypto-economic incentives, decentralized AI encourages participants around the globe to contribute computing power and data, fostering innovation and broad adoption of AI technologies. This article introduces one of the most fundamental protocols for decentralized AI: Proof of Honesty (PoH). Specifically, it explores how to incentivize geo-distributed service providers (a.k.a. nodes) to work toward a globally optimal goal, and how to verify that decentralized resources (such as bandwidth, computing power, and storage space) function as promised, with the aim of establishing a truly trustworthy AI.

PoH Incentivizes Selfish Nodes To Optimize Social Goals

For efficiency and fairness, networks should prioritize users with higher utility needs, as is done in internet bandwidth allocation and cellular scheduling systems. Yet optimizing resource allocation becomes challenging in decentralized environments because no single party holds comprehensive global information. From a game-theoretic perspective, nodes allocate resources based on local knowledge and self-interest, which leads to suboptimal social outcomes and limits network performance. The PoH protocol, built on blockchain technology, rewards service providers with cryptocurrency in proportion to their contributions to the network. For example, in decentralized AI, the dataset of an AI model can be stored across multiple geographically distributed service providers, so each provider typically holds pieces of many datasets belonging to different users. Suppose a provider receives a file-download request from a user in the network. Once the user has collected all pieces of the dataset from these providers, the crypto reward is shared proportionally among the providers that contributed the pieces. In this way, the PoH consensus protocol incentivizes providers to cache frequently accessed dataset pieces close to the users who request them and to deploy in zones with high, reliable bandwidth. As a result, all service providers independently work toward a globally optimal goal; that is, PoH optimizes the network topology and resource allocation in a decentralized manner.
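To make the proportional-reward idea concrete, here is a minimal Python sketch of how a single request’s reward might be split according to the number of pieces each provider served. The function names, data structures, and reward amount are illustrative assumptions, not the actual PoH implementation.

```python
# Minimal sketch (assumed, not the PoH reference code) of splitting one
# request's reward among providers in proportion to the pieces each served.

def split_reward(total_reward: float, pieces_served: dict[str, int]) -> dict[str, float]:
    """Share total_reward among providers proportionally to pieces served."""
    total_pieces = sum(pieces_served.values())
    if total_pieces == 0:
        return {provider: 0.0 for provider in pieces_served}
    return {
        provider: total_reward * count / total_pieces
        for provider, count in pieces_served.items()
    }

# Example: three providers jointly serve a 10-piece dataset for one request.
payouts = split_reward(
    total_reward=100.0,
    pieces_served={"provider_a": 5, "provider_b": 3, "provider_c": 2},
)
print(payouts)  # {'provider_a': 50.0, 'provider_b': 30.0, 'provider_c': 20.0}
```

Because payouts track actual contributions per request, a provider earns more only by serving more pieces, faster and more reliably, which is exactly the behavior the protocol wants to encourage.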

PoH Enables Decentralized Trustworthy Computing

Security and trust are paramount in the digital era, especially when AI computing tasks are outsourced to unknown service providers in a decentralized AI network. PoH acts as a vigilant guardian for AI computing, ensuring that outsourced tasks are completed accurately and honestly. It employs a mechanism inspired by law enforcement’s use of entrapment, discreetly mixing test tasks, or “phishing tasks,” among real tasks. If a provider attempts to cut corners, the “PoH officers” (service providers assigned to create and distribute phishing tasks in the network) catch them in the act. This deters dishonesty and secures the process without the need for constant oversight.
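The following Python sketch illustrates the officer’s role: phishing tasks with known answers are shuffled into a batch of real tasks, and the provider’s returned results are audited against those answers. The task format, identifiers, and answer values are hypothetical; the protocol does not prescribe them.

```python
# Sketch (assumed structures) of an officer mixing phishing tasks into a
# batch and auditing the provider's results against the known answers.

import random

def build_batch(real_tasks: list[dict], phishing_tasks: list[dict]) -> list[dict]:
    """Interleave phishing tasks with real tasks so they are indistinguishable."""
    batch = real_tasks + phishing_tasks
    random.shuffle(batch)  # the provider cannot tell which tasks are tests
    return batch

def audit(results: dict[int, int], known_answers: dict[int, int]) -> bool:
    """Return True only if every phishing task was answered correctly."""
    return all(results.get(task_id) == answer
               for task_id, answer in known_answers.items())

# Example: two real tasks plus one phishing task whose answer (42) is known.
real = [{"id": 1, "payload": "train shard 1"}, {"id": 2, "payload": "train shard 2"}]
phishing = [{"id": 3, "payload": "test shard"}]
known = {3: 42}

batch = build_batch(real, phishing)
provider_results = {1: 17, 2: 23, 3: 42}  # what the provider sends back
print("provider honest on tests:", audit(provider_results, known))
```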

In particular, PoH embeds phishing tasks within regular tasks so that they are indistinguishable from real work, yet their expected results are already known to the outsourcer. This creates uncertainty among service providers about which tasks are tests, encouraging integrity to avoid detection and penalties. Because the phishing tasks are never disclosed, providers are motivated to consistently deliver their best effort, knowing that any task could be a test and that the consequences of being caught outweigh the benefits of cheating. The mathematical model underlying PoH calculates the optimal number of phishing tasks to include, balancing security needs against computational overhead. The model accounts for computing costs, the likelihood of dishonest behavior, and the impact of incorrect computations on overall system integrity, with the aim of maximizing network security while minimizing unnecessary overhead.
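The full model is not reproduced here, but a deliberately simplified version conveys the trade-off: if faking a task saves a provider some compute cost and being caught on a phishing task incurs a penalty, cheating stops paying off once the expected penalty exceeds the saving. The Python sketch below encodes that back-of-the-envelope deterrence condition; the cost, penalty, and the linear model itself are assumptions for illustration, not the published PoH formulation.

```python
# Simplified, hypothetical deterrence model (not the published PoH math).
# A dishonest provider saves `compute_cost` by faking a result; if that task
# happens to be a phishing task (probability = phishing fraction p), it is
# caught and pays `penalty`. Cheating is unprofitable once p * penalty >= compute_cost.

import math

def min_phishing_fraction(compute_cost: float, penalty: float) -> float:
    """Smallest phishing-task fraction p such that p * penalty >= compute_cost."""
    if penalty <= 0:
        raise ValueError("penalty must be positive to deter cheating")
    return min(1.0, compute_cost / penalty)

def phishing_tasks_needed(n_real_tasks: int, compute_cost: float, penalty: float) -> int:
    """Number of phishing tasks to mix into a batch of n_real_tasks real tasks."""
    p = min_phishing_fraction(compute_cost, penalty)
    if p >= 1.0:
        raise ValueError("penalty too small: tests alone cannot deter cheating")
    # p = k / (n_real + k)  =>  k = p * n_real / (1 - p)
    return math.ceil(p * n_real_tasks / (1.0 - p))

# Example: faking a task saves 1 unit of compute, getting caught costs a
# 50-unit penalty, so at least 2% of the batch should be phishing tasks.
print(min_phishing_fraction(compute_cost=1.0, penalty=50.0))                 # 0.02
print(phishing_tasks_needed(n_real_tasks=100, compute_cost=1.0, penalty=50.0))  # 3
```

Even in this toy version, the qualitative conclusion matches the article’s point: a modest number of hidden tests, backed by a sufficiently large penalty, is enough to make honesty the rational strategy, keeping verification overhead low.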

Democratizing AI Development

Though still in its infancy, decentralized AI promises to revolutionize how AI development occurs. By facilitating direct interactions between developers and users, bypassing the traditional gatekeepers of centralized authorities, it paves the way for a more democratic and trustworthy AI ecosystem.

This shift could dramatically alter the power dynamics within the AI market. As more vendors embrace decentralized AI, the dominance of proprietary models may wane, eroding the market control now concentrated in a handful of centralized players. Consequently, this paradigm shift is expected to usher in an era of greater transparency and inclusivity in AI development, marking a substantial step toward democratizing access to AI technologies.
