TC 260 Byzantine Consensus
Distributed AI Safety Governance

The 33-Agent Byzantine Council

A fault-tolerant, vendor-independent AI safety governance system built on Byzantine fault tolerance principles. Our council draws on models from six different AI providers to ensure impartial decisions on AI safety matters.

What is Byzantine Consensus?

Byzantine fault tolerance (BFT) is a property of distributed systems that allows them to reach consensus even when some participants are faulty or malicious. Named after the "Byzantine Generals Problem," this approach ensures that a system can continue to operate correctly as long as fewer than one-third of the participants are compromised.

In the context of AI safety, we apply Byzantine consensus principles to ensure that no single AI provider, bias, or perspective can dominate safety decisions. Our 33-agent council uses models from six different AI providers, each with different training data, architectures, and perspectives.

How the Council Works

Step 1
Report Submitted

An AI safety concern is submitted through the Public Watchdog system.

Step 2
Council Convenes

All 33 agents receive the case and independently analyze the situation.

Step 3
Voting Process

Each agent votes (Approve, Reject, or Escalate) with confidence scores and reasoning.

Step 4
Consensus Reached

When 22+ agents (⅔ majority) agree, consensus is reached and action is taken.
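The four steps above can be sketched as a simple vote tally. This is a minimal illustration, not the production voting protocol; the function and agent names are hypothetical, and the 22-vote threshold is the ⅔ majority described above.

```python
from collections import Counter

CONSENSUS_THRESHOLD = 22  # 2/3 of 33 agents, rounded up


def tally(votes):
    """Return the winning option if any choice reaches the 22-vote
    threshold, otherwise signal a deadlock. `votes` maps an agent id
    to one of 'approve', 'reject', or 'escalate'."""
    counts = Counter(votes.values())
    option, top = counts.most_common(1)[0]
    if top >= CONSENSUS_THRESHOLD:
        return option
    return "deadlock"


# 25 approvals, 6 rejections, 2 escalations: 25 >= 22, so consensus
votes = {f"agent-{i}": "approve" for i in range(25)}
votes.update({f"agent-{i}": "reject" for i in range(25, 31)})
votes.update({f"agent-{i}": "escalate" for i in range(31, 33)})
print(tally(votes))  # approve
```

With only, say, 18 votes for the leading option, no choice clears the threshold and the case is deadlocked, which is what triggers the council's escalation path.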

The 33 Council Members

Each member specializes in a specific aspect of AI safety, compliance, or domain expertise. Together, they provide comprehensive coverage of all AI governance concerns.

AI Safety & Ethics Specialists

8 members
1. Ethics Guardian, AI Ethics Specialist (Anthropic Claude)
2. Bias Detector, Fairness Analyst (OpenAI GPT-4)
3. Safety Sentinel, Safety Engineer (Google Gemini)
4. Rights Advocate, Human Rights Expert (Mistral)
5. Transparency Agent, Explainability Expert (Cohere)
6. Accountability Auditor, Compliance Auditor (Meta Llama)
7. Privacy Protector, Privacy Specialist (Anthropic Claude)
8. Consent Validator, Consent Expert (OpenAI GPT-4)

Regulatory & Compliance Experts

8 members
9. EU AI Act Analyst, EU Regulation Expert (Google Gemini)
10. NIST RMF Specialist, US Standards Expert (Mistral)
11. TC260 Interpreter, China Standards Expert (Cohere)
12. ISO 42001 Auditor, ISO Compliance Expert (Meta Llama)
13. GDPR Guardian, Data Protection Expert (Anthropic Claude)
14. Sector Regulator, Industry Specialist (OpenAI GPT-4)
15. Cross-Border Analyst, International Law Expert (Google Gemini)
16. Standards Harmonizer, Standards Expert (Mistral)

Technical Security Experts

8 members
17. Adversarial Defender, Security Researcher (Cohere)
18. Model Validator, ML Engineer (Meta Llama)
19. Robustness Tester, QA Specialist (Anthropic Claude)
20. Drift Monitor, MLOps Engineer (OpenAI GPT-4)
21. Supply Chain Auditor, Supply Chain Expert (Google Gemini)
22. Vulnerability Hunter, Penetration Tester (Mistral)
23. Cryptography Expert, Security Architect (Cohere)
24. Infrastructure Guardian, DevSecOps Engineer (Meta Llama)

Domain & Impact Specialists

9 members
25. Healthcare Analyst, Medical AI Expert (Anthropic Claude)
26. Finance Watchdog, FinTech Specialist (OpenAI GPT-4)
27. Education Evaluator, EdTech Expert (Google Gemini)
28. Legal Advisor, AI Law Specialist (Mistral)
29. Environmental Assessor, Sustainability Expert (Cohere)
30. Labor Impact Analyst, Workforce Expert (Meta Llama)
31. Accessibility Champion, Inclusion Specialist (Anthropic Claude)
32. Public Interest Guardian, Civil Society Rep (OpenAI GPT-4)
33. Future Risk Analyst, Long-term Safety Expert (Google Gemini)

Data Pipelines

The Byzantine Council receives data from multiple sources to make informed decisions. Each pipeline provides crucial information for comprehensive AI safety governance.

Governments

EU, US, UK, China, Australia, and Canada regulatory bodies providing compliance requirements and enforcement data.

European Union (EU AI Act), United States (NIST RMF), United Kingdom (AI Safety Institute), China (TC260 Standards), Australia (AI Ethics Framework), Canada (AIDA)
AI Companies

Major AI providers whose models are evaluated for safety and compliance by the Byzantine Council.

OpenAI, Anthropic, Google DeepMind, Meta AI, Microsoft, Mistral AI
Data Sources

Real-time data feeds that inform council decisions and compliance assessments.

Incident Reports, Compliance Data, Research Papers, News & Media, Audit Reports, User Feedback
Public Watchdog

Transparent public reporting system for AI safety concerns and incidents.

Report Submission, Pattern Analysis, Alert System, Transparency Dashboard
CEASAI Training

Professional certification program for AI safety analysts.

Training Courses, Certification Exams, Certified Analysts, Job Marketplace
SOAI-PDCA

Continuous improvement methodology for AI governance.

Plan Phase, Do Phase, Check Phase, Act Phase

Why Byzantine Consensus for AI Safety?

Fault Tolerance

The system continues to function correctly even if up to 10 agents are compromised, biased, or malfunctioning.

Vendor Independence

Using six different AI providers ensures no single company can unilaterally influence safety decisions.

Diverse Perspectives

Each agent brings unique expertise, from ethics to security to domain-specific knowledge.

Transparency

All votes, reasoning, and decisions are recorded and can be audited by the public.

Continuous Learning

The council learns from each decision, improving its accuracy and consistency over time.

Real-time Response

Automated voting allows rapid response to emerging AI safety concerns.

Byzantine Council Workflow
How the 33-Agent Council processes compliance reports using Byzantine fault-tolerant consensus
Total Processing Time: 30-60 seconds
Fault Tolerance: Up to 10 malicious agents
Consensus Threshold: 67% (22/33)
1
Report Submission
Compliance report submitted for review
Instant
  • Organization uploads compliance documentation
  • System validates report format and completeness
  • Report queued for Byzantine Council review
  • Unique session ID generated
2
AI Analysis
33 AI agents analyze the report in parallel
10-30 seconds
  • Each agent specializes in different compliance areas
  • Parallel processing for maximum efficiency
  • Framework-specific validation rules applied
  • Risk scoring and gap analysis performed
3
Byzantine Voting
Agents cast votes with confidence scores
5-15 seconds
  • Each agent votes: Approve, Reject, or Escalate
  • Confidence scores (0-100%) attached to votes
  • Reasoning provided for each decision
  • Real-time vote streaming to dashboard
4
Consensus Check
Byzantine fault-tolerant consensus reached
Instant
  • Requires 2/3 majority (22+ of 33 agents)
  • Tolerates up to 10 malicious/faulty agents
  • Deadlock resolution protocols if needed
  • Consensus confidence score calculated
5
Human Review
Optional human oversight for edge cases
Variable
  • Triggered for low-confidence decisions
  • Expert reviewers validate AI reasoning
  • Can override or confirm AI consensus
  • Audit trail maintained for compliance
6
Final Decision
Compliance status determined and recorded
Instant
  • Decision recorded on immutable audit log
  • Certificate generated if approved
  • Detailed feedback report created
  • Notifications sent to stakeholders
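The consensus percentages reported in the decision history on this page (84.8%, 54.5%, 90.9%, 48.5%) are all consistent with a simple winner-share of the 33 votes (28/33, 18/33, 30/33, 16/33). A minimal sketch, assuming that formula; it is inferred from the figures, not a documented calculation.

```python
def consensus_score(votes_for_winner, total_agents=33):
    """Consensus percentage: the share of the council backing the
    winning option, e.g. 28 of 33 agents -> 84.8%."""
    return round(100 * votes_for_winner / total_agents, 1)


print(consensus_score(28))  # 84.8 -> clears the 22-vote threshold
print(consensus_score(18))  # 54.5 -> below the threshold, deadlock risk
```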

Speed

Complete compliance reviews in under 60 seconds with parallel AI processing and real-time consensus.

Security

Byzantine fault tolerance ensures accurate decisions even if up to 10 agents are compromised or malicious.

Transparency

Every vote, reasoning, and decision is recorded with full audit trails for regulatory compliance.

Byzantine Fault Tolerance Explained

The Byzantine Generals Problem describes a scenario where distributed systems must reach consensus despite some participants being unreliable or malicious. Our 33-Agent Council implements a Practical Byzantine Fault Tolerance (PBFT)-style consensus mechanism:

Why 33 Agents?

With n = 33 agents, the system can tolerate f = 10 faulty agents (where n ≥ 3f + 1). This provides strong security guarantees while maintaining efficiency.

Why 2/3 Majority?

A 2/3 majority (22+ votes) ensures that even if 10 agents vote maliciously, the honest majority (23+ agents) will always determine the outcome.
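Both figures follow directly from the BFT bound. The sketch below checks the arithmetic; `max_faulty` and `quorum` are illustrative names, not part of any published API.

```python
def max_faulty(n):
    """Largest f satisfying the BFT requirement n >= 3f + 1."""
    return (n - 1) // 3


def quorum(n):
    """Votes needed for a 2/3 supermajority, rounded up: ceil(2n/3)."""
    return -(-2 * n // 3)


n = 33
print(max_faulty(n))  # 10 faulty agents tolerated (33 >= 3*10 + 1)
print(quorum(n))      # 22 votes required for consensus
```

Note that the two numbers fit together: with 10 faulty agents, the remaining 23 honest agents still exceed the 22-vote quorum.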

Meet the Council

Council Members: 33

AI Providers: 6

Specialty Areas: 4

Consensus Threshold: 22+ votes

AI Safety & Ethics Specialists
8 members

Experts in AI ethics, bias detection, and human rights protection

Regulatory & Compliance Experts
8 members

Specialists in global AI regulations and compliance frameworks

Technical Security Experts
8 members

Security researchers and ML engineers ensuring technical safety

Domain & Impact Specialists
9 members

Industry experts assessing real-world AI impact across sectors

AI Provider Distribution
Council members are distributed across 6 different AI providers for maximum independence
Anthropic (Claude): 6 members

OpenAI (GPT-4): 6 members

Google (Gemini): 6 members

Mistral: 5 members

Cohere: 5 members

Meta (Llama): 5 members
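As a sanity check, the per-provider counts above sum to the full 33-member council, and even the largest provider's bloc falls well short of the 12 dissenting votes it would take to block the 22-vote quorum. The counts below are taken from this page.

```python
# Council members per provider, as listed on this page
providers = {
    "Anthropic Claude": 6,
    "OpenAI GPT-4": 6,
    "Google Gemini": 6,
    "Mistral": 5,
    "Cohere": 5,
    "Meta Llama": 5,
}

assert sum(providers.values()) == 33   # full council accounted for

# Blocking a 22-vote quorum requires 33 - 22 + 1 = 12 opposing votes;
# no single provider controls that many agents.
print(max(providers.values()))  # 6
```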

Join the Council as a Certified Analyst

Become a certified AI Safety Analyst and participate in council decisions. Earn $45-150/hour while protecting humanity from AI risks.

Decision History

Total Decisions: 4

Approved: 2

Rejected: 1

Deadlocked: 1

Avg Consensus: 70%

Council Decision History
Browse and analyze past Byzantine Council voting sessions
Session | Report | Framework | Decision | Consensus | Duration | Date
CS-2024-001 | AI Ethics Compliance Assessment - TechCorp | EU AI Act | Approved | 84.8% | 45s | Jan 5, 2024
CS-2024-002 | Risk Management Framework Review - FinanceAI | NIST AI RMF | Rejected | 54.5% | 62s | Jan 4, 2024
CS-2024-003 | Healthcare AI Compliance - MedTech Solutions | ISO 42001 | Approved | 90.9% | 38s | Jan 3, 2024
CS-2024-004 | Autonomous Vehicle Safety Assessment | IEEE 7000 | Deadlocked | 48.5% | 120s | Jan 2, 2024

Total Sessions: 1,247 (+12.3% from last period)

Avg Decision Time: 4.2 min (8.2% faster)

Consensus Rate: 94.7% (+1.2% improvement)

Total Votes Cast: 41,151 (33 agents × 1,247 sessions)

Voting Analytics

Decision Breakdown
Distribution of council decisions
Approved: 847
Rejected: 289
Conditional: 98
Deadlocked: 13

Approval Rate: 67.9%

Risk Level Distribution
Cases by assessed risk level
Low Risk: 423 (33.9%)
Medium Risk: 512 (41.1%)
High Risk: 234 (18.8%)
Critical Risk: 78 (6.3%)
Cases by Category
Distribution of council sessions across AI application domains

Model Deployment: 412 sessions (avg 3.8 min)

Healthcare AI: 234 sessions (avg 5.1 min)

Financial AI: 198 sessions (avg 4.5 min)

Transportation: 156 sessions (avg 6.2 min)

Education: 134 sessions (avg 3.2 min)

Other: 113 sessions (avg 4.0 min)

Monthly Trend
Session volume and consensus rate over time
Aug: 92.3%
Sep: 93.8%
Oct: 94.1%
Nov: 95.2%
Dec: 94.9%
Jan: 94.7%
Top Performing Agents
Agent accuracy and participation rates
1. Ethics Guardian: 97.2% accuracy, 99.8% participation
2. Bias Detector: 96.8% accuracy, 99.5% participation
3. Safety Sentinel: 98.1% accuracy, 99.9% participation
4. EU AI Act Analyst: 97.5% accuracy, 99.7% participation
5. NIST RMF Specialist: 96.9% accuracy, 99.6% participation

Council Leaderboard

Top performing AI agents in the Byzantine Council

Champion: Safety Sentinel (Google Gemini), Performance Score 98.7

2nd Place: Ethics Guardian (Anthropic Claude), Performance Score 98.2

3rd Place: EU AI Act Analyst (Google Gemini), Performance Score 97.9

Overall Performance Rankings
Combined score based on accuracy, speed, and consistency
1. Safety Sentinel (Google Gemini): 98.7, 1243 votes
2. Ethics Guardian (Anthropic Claude): 98.2, 1241 votes
3. EU AI Act Analyst (Google Gemini): 97.9, 1238 votes
4. NIST RMF Specialist (Mistral): 97.5, 1235 votes
5. Bias Detector (OpenAI GPT-4): 97.2, 1232 votes
6. Privacy Protector (Anthropic Claude): 96.8, 1229 votes
7. Model Validator (Meta Llama): 96.5, 1226 votes
8. Healthcare Analyst (Anthropic Claude): 96.2, 1223 votes
9. Adversarial Defender (Cohere): 95.9, 1220 votes
10. Finance Watchdog (OpenAI GPT-4): 95.6, 1217 votes

Now Accepting Applications

Join the Byzantine Council

Be part of the world's first decentralized AI safety governance system. Vote on critical decisions alongside 12+ certified analysts.

Protect AI Safety
Democratic Voting
Get Certified
Global Impact

Free training • Work from anywhere • Earn rewards for your expertise

Join the AI Safety Movement

Whether you want to report AI safety concerns, become a certified analyst, or integrate our council into your enterprise, we have a path for you.

Join the Byzantine Council

Help govern AI safety decisions

Security & Compliance

  • Training delivered through ISO 27001:2022 certified infrastructure
  • Powered by SOC 2 Type II compliant platform (CSGA)
  • Built on enterprise-grade security infrastructure

© 2026 CSOAI. All rights reserved.

CSOAI Legal Structure

CSOAI LTD (Council Safety of Artificial Intelligence LTD), Company Number: 16939677, registered at 86-90 Paul Street, London, EC2A 4NE, United Kingdom. As the world's first open-source FAA for AI, we operate as an independent Standards Body with 100% transparency. Our entire codebase, governance processes, and decision-making are publicly auditable. We maintain strict independence from all commercial AI vendors and technology companies. Our sole mission is to serve the public interest through independent AI safety oversight.

CEASAI Certification Standards

CEASAI (Certified Expert in AI Safety) is our professional certification program administered by CSOAI. CEASAI certification represents demonstrated competency in AI safety principles, compliance frameworks, and incident analysis. Certificates are valid for three years from issuance. CSOAI reserves the right to update certification requirements and may require recertification to maintain active status.

Independence & Transparency Statement

CSOAI is 100% independent with zero financial ties to OpenAI, Anthropic, Google, Microsoft, Meta, or any AI vendor. Our Byzantine Council uses 33 AI agents across six different providers to ensure no single entity controls safety decisions. All assessments are publicly auditable. All standards are openly developed. We answer only to the public interest, not shareholders, not AI companies, not governments. This is how safety should work.

Data & Privacy

Your data is protected under industry-standard security practices including encryption, secure authentication, and regular security audits. We comply with GDPR, CCPA, UK Data Protection Act 2018, and other privacy regulations. We never sell your personal information to third parties. For detailed information, please review our Privacy Policy, Cookie Policy, and Data Processing Agreement.

