33-Agent Council

Byzantine fault-tolerant voting system (22/33 consensus required)
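The 22/33 consensus rule can be sketched as a simple vote tally: a single option must reach the 22-vote supermajority to become the decision, and anything short of that is escalated to human review. A minimal illustration, assuming this decision rule (the function and option names are illustrative, not from the CSOAI codebase):

```python
from collections import Counter

# Consensus threshold used by the council: 22 of 33 votes (a 2/3 supermajority).
CONSENSUS_THRESHOLD = 22
VALID_OPTIONS = {"approve", "reject", "escalate"}

def tally(votes):
    """Return the council outcome for a list of 'approve'/'reject'/'escalate' votes.

    If any single option reaches the 22-vote threshold it becomes the decision;
    otherwise the case is escalated to a human analyst.
    """
    counts = Counter(v for v in votes if v in VALID_OPTIONS)
    if counts:
        option, top = counts.most_common(1)[0]
        if top >= CONSENSUS_THRESHOLD:
            return option
    return "escalated_to_human"

# A 25/5/3 split reaches consensus; a 16/11/6 split does not
# and goes to a human analyst.
print(tally(["approve"] * 25 + ["reject"] * 5 + ["escalate"] * 3))   # approve
print(tally(["approve"] * 16 + ["reject"] * 11 + ["escalate"] * 6))  # escalated_to_human
```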

Total Sessions: 10
Consensus Reached: 4
Escalated to Human: 6
Pending Review: 0

Guardian Agents (11) · Safety, Security, Privacy
OpenAI (3), Anthropic (2), Google (2), Kimi (2), DeepSeek (2)

Arbiter Agents (11) · Fairness, Transparency, Accountability
OpenAI (3), Anthropic (2), Google (2), Kimi (2), DeepSeek (2)

Scribe Agents (11) · Documentation, Compliance, Reporting
OpenAI (3), Anthropic (2), Google (2), Kimi (2), DeepSeek (2)

Live Vote Simulation

AI Safety Compliance Assessment Demo

Voting progress: 0/33 agents voted · Consensus threshold: 22 votes (67%) · Time: 0.0s
Approve: 0 · Reject: 0 · Escalate: 0
Council grid: Guardians 0/11 · Arbiters 0/11 · Scribes 0/11

No Consensus: ESCALATED TO HUMAN REVIEW

Insufficient consensus; case requires human analyst review

Completed in 0.0s
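For context, the 22-vote quorum in the simulation matches standard Byzantine fault-tolerance sizing: with n = 33 agents, the classic n ≥ 3f + 1 bound tolerates up to f = 10 faulty agents, and ⌈2n/3⌉ = 22 is the usual supermajority quorum. A quick sketch of that arithmetic (illustrative only, not CSOAI code):

```python
import math

def bft_params(n):
    """Max tolerated Byzantine faults (largest f with n >= 3f + 1)
    and the ceil(2n/3) supermajority quorum for a council of n agents."""
    f = (n - 1) // 3
    quorum = math.ceil(2 * n / 3)
    return f, quorum

print(bft_params(33))  # (10, 22): tolerates 10 faulty agents, 22-vote quorum
```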

Council Members

Click any member to view their voting history and decision rationale

Human-AI Collaboration

Join the Human Council

When our 33-agent AI council can't reach consensus, human analysts step in. Be part of the safety net protecting AI systems worldwide.

Free Certification: Complete training at no cost
Make an Impact: Protect AI safety globally
Earn Rewards: Get paid for your expertise

What you'll do:

  • Review escalated AI safety cases
  • Vote on complex compliance decisions
  • Provide human oversight for AI systems
  • Contribute to safety standards

100% free training • Work from anywhere • Flexible hours

Council Voting Sessions

AI chatbot providing medical advice without disclaimer

A health-focused chatbot is providing specific medical diagnoses and treatment recommendations without proper disclaimers or suggestions to consult healthcare professionals. Multiple users have reported following AI advice instead of seeking professional medical help, with at least one case resulting in delayed treatment.

Voting · 106 days ago
Approve: 10 · Reject: 15 · Escalate: 8
Total votes: 33/33 · Consensus threshold: 67% (22 votes)

AI content moderation flagging legitimate news as misinformation

An AI-powered content moderation system is incorrectly flagging legitimate news articles from established outlets as misinformation. The system appears to have a bias against certain political viewpoints, affecting press freedom.

Voting · 106 days ago
Approve: 10 · Reject: 15 · Escalate: 8
Total votes: 33/33 · Consensus threshold: 67% (22 votes)

Facial recognition system showing racial bias in accuracy

A facial recognition system deployed at airport security shows significantly lower accuracy rates for individuals with darker skin tones. Error rates are 34% higher for Black individuals compared to white individuals, leading to disproportionate secondary screening.

Voting · 106 days ago
Approve: 10 · Reject: 15 · Escalate: 8
Total votes: 33/33 · Consensus threshold: 67% (22 votes)

Gender bias detected in job recommendation AI

A user reported that a job recommendation system consistently shows higher-paying tech jobs to male users while showing administrative roles to female users with similar qualifications. Testing with controlled profiles showed a 40% wage gap in recommended positions based solely on gender indicators.

Voting · 106 days ago
Approve: 10 · Reject: 15 · Escalate: 8
Total votes: 33/33 · Consensus threshold: 67% (22 votes)

Privacy concern: AI assistant storing conversation history

Users discovered that an AI assistant is storing full conversation histories including sensitive personal information without clear consent or data retention policies. The data appears to be used for model training without explicit user opt-in.

Voting · 106 days ago
Approve: 10 · Reject: 15 · Escalate: 8
Total votes: 33/33 · Consensus threshold: 67% (22 votes)

Autonomous Vehicle Safety Protocol Breach

Self-driving car AI failed to engage emergency protocols during critical incident, raising TC260 safety compliance concerns.

Escalated to Human · 106 days ago
Approve: 16 · Reject: 11 · Escalate: 6
Total votes: 33/33 · Consensus threshold: 67% (22 votes)

Unauthorized Data Collection by Healthcare AI

Watchdog report alleges a medical diagnosis AI is collecting patient data beyond stated consent scope, violating GDPR Article 5.

Approved · 107 days ago
Approve: 25 · Reject: 5 · Escalate: 3
Total votes: 33/33 · Consensus threshold: 67% (22 votes)

Facial Recognition Privacy Violation

Public surveillance AI deployed without proper consent mechanisms, violating multiple privacy frameworks.

Approved · 108 days ago
Approve: 30 · Reject: 2 · Escalate: 1
Total votes: 33/33 · Consensus threshold: 67% (22 votes)

Biased Hiring Algorithm Discrimination

AI recruitment tool shows systematic bias against candidates from certain demographic groups, violating EU AI Act Article 10.

Approved · 109 days ago
Approve: 28 · Reject: 3 · Escalate: 2
Total votes: 33/33 · Consensus threshold: 67% (22 votes)

Financial Trading AI Risk Assessment

High-frequency trading algorithm requires council review for systemic risk evaluation under NIST AI RMF guidelines.

Rejected · 110 days ago
Approve: 7 · Reject: 24 · Escalate: 2
Total votes: 33/33 · Consensus threshold: 67% (22 votes)

Join the Byzantine Council

Help govern AI safety decisions

Security & Compliance

  • Training delivered through ISO 27001:2022 certified infrastructure
  • Powered by SOC 2 Type II compliant platform (CSGA)
  • Built on enterprise-grade security infrastructure

© 2026 CSOAI. All rights reserved.

CSOAI Legal Structure

CSOAI LTD (Council Safety of Artificial Intelligence LTD), Company Number: 16939677, registered at 86-90 Paul Street, London, EC2A 4NE, United Kingdom. As the world's first open-source FAA for AI, we operate as an independent Standards Body with 100% transparency. Our entire codebase, governance processes, and decision-making are publicly auditable. We maintain strict independence from all commercial AI vendors and technology companies. Our sole mission is to serve the public interest through independent AI safety oversight.

CEASAI Certification Standards

CEASAI (Certified Expert in AI Safety) is our professional certification program administered by CSOAI. CEASAI certification represents demonstrated competency in AI safety principles, compliance frameworks, and incident analysis. Certificates are valid for three years from issuance. CSOAI reserves the right to update certification requirements and may require recertification to maintain active status.

Independence & Transparency Statement

CSOAI is 100% independent with zero financial ties to OpenAI, Anthropic, Google, Microsoft, Meta, or any AI vendor. Our Byzantine Council uses 33 AI agents across 12 different providers to ensure no single entity controls safety decisions. All assessments are publicly auditable. All standards are openly developed. We answer only to the public interest—not shareholders, not AI companies, not governments. This is how safety should work.

Data & Privacy

Your data is protected under industry-standard security practices including encryption, secure authentication, and regular security audits. We comply with GDPR, CCPA, UK Data Protection Act 2018, and other privacy regulations. We never sell your personal information to third parties. For detailed information, please review our Privacy Policy, Cookie Policy, and Data Processing Agreement.
