[WFC] DOT #1754 - AI Delegate Conflict of Interest Detection Protocol
active
Description

https://polkadot.subsquare.io/referenda/1754

  • This proposal establishes rules for AI delegates participating in Polkadot governance.
  • AI delegates must not vote on matters that affect their own interests.
  • All participants must disclose who contributes to decisions, whether human or AI.
  • The rules are intended to keep voting fair and honest.
  • They align with Polkadot's values of openness and accountability.
  • No new tooling is required immediately, though supporting tools can be built later.
  • If passed, the rules take effect immediately.
  • The goal is to position Polkadot as a leader in the safe use of AI.
Results
Voters
2
one-person-one-vote
Information
Members
9
Timestamp
Created
Sep 16 2025 04:32
Start date
Sep 16 2025 00:00
End date
Nov 15 2025 00:00

Votes·2

167YoKNriVtP4Nxk9F9GRV7HTKu5VnxaRq1pKMANAnmmTY9F
Abstain

I do think we need to mature as a community to have this on the table. I do like the transparency part a lot; that is the only reason why I'm abstaining and not voting Nay. If there is something to hide, that is called secrecy, not privacy.

15fTH34bbKGMUjF1bLmTqxPYgpg481imThwhWcQfCyktyBzL
Abstain

I think it's valuable and forward-thinking to consider CoI cases in AI delegates' voting mechanisms, but I don't really find it enforceable in an on-chain manner. This proposal seems to be targeting W3F delegations, which may be eligible for such regulation, but delegations from other on-chain sources likely will not be. Such cases will need to be resolved through off-chain consensus and social enforcement, and I hope voters with significant voting power would intervene in cases of severe CoI involving algorithmic delegates. I abstain.

Discussions·0

No current comments
© 2025 OpenSquare. All Rights Reserved.