AI transparency and labeling policy
Swivver uses AI to support research, review, and monitoring workflows. This policy explains where AI assists and where human accountability remains required.
Last updated: March 3, 2026.
Role of AI at Swivver
AI assists with tasks such as claim extraction, draft refinement, factuality reporting, and continuous monitoring. It is support tooling, not a replacement for human accountability.
Origin labels
Story origin may be labeled Human, AI-assisted, or, where policy allows, AI-generated. These labels are intended to make content provenance more inspectable for readers.
Verification support
AI checks can flag potential conflicts and missing evidence links. These signals support review workflows but do not guarantee that content is accurate or final.
Monitoring and status updates
Stories may be re-checked as new developments appear. Labels such as Verified, Developing, and Disputed may change over time to reflect shifts in confidence.
Human responsibility
Human creators, editors, and moderators remain responsible for publishing decisions, corrections, disputes, and enforcement outcomes.
Policy updates and contact
We may update this policy as AI capabilities and safeguards evolve. Questions may be sent to contact@swivver.ai.
