At Divinci AI, we prioritize safe, ethical, and transparent AI solutions. Our products, including web and mobile applications, serve diverse use cases in healthcare and other sensitive fields. This document details our commitment to using only licensed data, enforcing rigorous safety measures, and providing robust human moderation interfaces for our Retrieval-Augmented Generation (RAG) and fine-tuned language models.
Licensed Data and Responsible AI Development
We are committed to using only licensed and ethically sourced data in training our AI models. Our data governance practices ensure that every dataset is verified for legitimacy, licensing compliance, and relevance. This approach aligns with guidelines like those set by ANSI for trustworthy AI, emphasizing transparency and ethical sourcing.
Human-Centered Safety and Moderation
Divinci AI integrates human moderation interfaces across all custom AI solutions to foster responsible use and prevent misuse. These interfaces support:
- Content Management: We provide tools for reviewing, editing, and controlling the information our AI models generate, helping to align responses with ethical standards.
- Testing and Validation: Every model goes through rigorous testing to minimize bias, confabulation, and misinformation.
- Release Management: Our models undergo controlled release processes to ensure they are only deployed once they meet our high safety and reliability standards (a sketch of such a release gate follows this list).
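For illustration, here is a minimal sketch of how a human-gated release process might be modeled. The `ModelRelease` and `approve_release` names are hypothetical and do not reflect Divinci AI's actual implementation; the point is simply that deployment requires both a passing automated safety evaluation and an explicit human sign-off.

```python
from dataclasses import dataclass, field
from enum import Enum

class ReviewStatus(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class ModelRelease:
    """A candidate model version awaiting deployment (hypothetical structure)."""
    version: str
    safety_eval_passed: bool  # set by automated testing and validation
    status: ReviewStatus = ReviewStatus.PENDING
    reviewer_notes: list[str] = field(default_factory=list)

def approve_release(release: ModelRelease, reviewer: str, note: str) -> None:
    """A human reviewer may approve only after automated safety checks pass."""
    if not release.safety_eval_passed:
        raise ValueError("Automated safety evaluation must pass before human review.")
    release.status = ReviewStatus.APPROVED
    release.reviewer_notes.append(f"{reviewer}: {note}")
```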
Safety Features in Consumer and Patient-Facing Models
For applications involving consumers or patients, Divinci AI incorporates robust safety features such as:
- Transparency and Explainability: We strive for clarity in AI responses, helping users understand how decisions are made and why specific recommendations appear. This commitment reduces over-reliance and builds user trust.
- Moderation for Harmful Content: We actively moderate against harmful outputs, including any content that may be biased, inappropriate, or potentially misleading. Our systems implement safeguards to monitor and restrict outputs that could negatively impact users (see the sketch after this list).
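As a rough illustration of layered moderation, the sketch below chains independent safety checks and withholds any output that fails one of them. The check functions and terms here are placeholders, not our production filters; a real pipeline would add classifier-based checks and human escalation paths.

```python
from typing import Callable

# Each check returns None if the output passes, or a reason string if it should be blocked.
ModerationCheck = Callable[[str], str | None]

def blocklist_check(text: str) -> str | None:
    blocked_terms = {"example-harmful-term"}  # placeholder; real lists are far richer
    return "blocklist match" if any(term in text.lower() for term in blocked_terms) else None

def nonempty_check(text: str) -> str | None:
    return "empty output" if not text.strip() else None

def moderate(text: str, checks: list[ModerationCheck]) -> tuple[bool, list[str]]:
    """Run every check; release the output only if all of them pass."""
    reasons = [r for check in checks if (r := check(text)) is not None]
    return (not reasons, reasons)

ok, reasons = moderate("model output here", [nonempty_check, blocklist_check])
if not ok:
    print("Output withheld for human review:", reasons)
```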
Trust, Transparency, and User Accountability
Divinci AI fosters trust by creating transparent AI solutions. We provide clear user guidelines outlining system capabilities, limitations, and ethical constraints. For healthcare applications, we follow ANSI guidance on data privacy, bias mitigation, and compliance with relevant regulations.
AI Governance and Compliance
Our AI governance framework aligns with standards from NIST and IEEE, focusing on accountability, transparency, and robust performance metrics. Regular audits of our systems ensure that our AI models remain aligned with Divinci AI's ethical standards throughout their lifecycle.
Acknowledgment
We’d like to thank The Alan Turing Institute’s AI Standards Hub for providing invaluable AI standards resources that have inspired and informed Divinci AI’s AI Safety and Ethics policies.
Our principles
1. Human-centered design
- Human oversight: AI systems should augment human capabilities, not replace human judgment
- Transparency: Users should understand how AI systems make decisions that affect them
- Controllability: Humans must retain meaningful control over AI systems and their outcomes
2. Fairness and non-discrimination
- Bias mitigation: We actively work to identify and reduce bias in our AI systems
- Inclusive development: Our development process includes diverse perspectives and use cases
- Equal access: We strive to ensure our AI benefits are accessible to all users
3. Privacy and data protection
- Data minimization: We collect and process only the data necessary for system functionality
- User consent: Clear, informed consent for all data collection and processing
- Secure handling: Robust security measures to protect user data and privacy
4. Reliability and safety
- Rigorous testing: Comprehensive testing across diverse scenarios and edge cases
- Continuous monitoring: Ongoing assessment of system performance and safety
- Fail-safe mechanisms: Systems designed to fail safely when encountering unexpected situations
Technical safeguards
Model security
- Adversarial robustness: Protection against malicious inputs and attacks
- Output filtering: Multiple layers of content filtering and safety checks
- Version control: Strict versioning and rollback capabilities for all AI models (a sketch follows this list)
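A minimal sketch of versioned deployment with rollback, assuming a hypothetical in-memory registry; a production system would persist this state and record provenance for each version:

```python
class ModelRegistry:
    """Keeps every deployed version so rollback to a known-good model is always possible."""

    def __init__(self) -> None:
        self._versions: list[str] = []

    def deploy(self, version: str) -> None:
        self._versions.append(version)

    @property
    def current(self) -> str | None:
        return self._versions[-1] if self._versions else None

    def rollback(self) -> str | None:
        """Revert to the previous version; the current one is retired."""
        if len(self._versions) > 1:
            self._versions.pop()
        return self.current
```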
Quality assurance
- Red team testing: Dedicated teams attempt to find vulnerabilities and failure modes
- Evaluation frameworks: Comprehensive metrics for safety, fairness, and performance (see the sketch after this list)
- External audits: Regular third-party assessments of our AI systems
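As one simplified example of such a metric, the sketch below computes a safety pass rate over a batch of model outputs. The `is_safe` predicate is a stand-in for whatever classifier or human-labeled rubric a real evaluation framework would use.

```python
from typing import Callable

def evaluate_safety(outputs: list[str], is_safe: Callable[[str], bool]) -> dict[str, float]:
    """Compute a simple safety pass rate over a batch of outputs (illustrative only)."""
    if not outputs:
        return {"safety_rate": 0.0, "violation_rate": 0.0}
    safe = sum(1 for output in outputs if is_safe(output))
    return {
        "safety_rate": safe / len(outputs),
        "violation_rate": (len(outputs) - safe) / len(outputs),
    }
```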
Deployment controls
- Staged rollouts: Gradual deployment with monitoring at each stage
- Circuit breakers: Automatic shutdown mechanisms for dangerous or unexpected behavior (sketched after this list)
- Human review: Critical decisions require human oversight and approval
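To make the circuit-breaker idea concrete, here is a minimal sketch that trips after too many safety violations inside a sliding time window. The class name and thresholds are illustrative, not our production values; when `tripped` becomes true, serving would be suspended pending human review.

```python
import time

class CircuitBreaker:
    """Trips (halts the model) after too many violations within a sliding window."""

    def __init__(self, max_violations: int, window_seconds: float) -> None:
        self.max_violations = max_violations
        self.window_seconds = window_seconds
        self._violations: list[float] = []

    def record_violation(self) -> None:
        now = time.monotonic()
        self._violations.append(now)
        # Keep only violations that fall inside the sliding window.
        self._violations = [t for t in self._violations if now - t <= self.window_seconds]

    @property
    def tripped(self) -> bool:
        return len(self._violations) >= self.max_violations

# Example: halt serving if 5 violations occur within one minute.
breaker = CircuitBreaker(max_violations=5, window_seconds=60.0)
```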
Ethical guidelines
Development practices
- Inclusive teams: Diverse development teams with varied backgrounds and perspectives
- Stakeholder engagement: Regular consultation with affected communities and experts
- Impact assessment: Thorough evaluation of potential societal impacts before deployment
Use case restrictions
We prohibit the use of our AI systems for:
- Generating harmful, illegal, or abusive content
- Surveillance or monitoring without appropriate consent and legal basis
- Decision-making in high-stakes domains without human oversight
- Manipulation or deception of users
Data ethics
- Consent and transparency: Clear information about how data is used
- Purpose limitation: Data used only for stated, legitimate purposes
- User rights: Respect for user rights including access, correction, and deletion
Governance and oversight
Internal governance
- Ethics review board: Dedicated committee overseeing ethical implications of our work
- Regular training: Ongoing education for all team members on AI ethics and safety
- Clear policies: Documented procedures for handling ethical concerns and incidents
External collaboration
- Industry partnerships: Collaboration with other organizations on safety standards
- Academic research: Support for independent research on AI safety and ethics
- Regulatory engagement: Active participation in policy discussions and standard-setting
Incident response
- Rapid response: Quick identification and mitigation of safety issues
- Transparency: Public reporting of significant incidents and lessons learned
- Continuous improvement: Regular updates to policies and practices based on experience
Research and development
Safety research
We invest in fundamental research on:
- Alignment techniques to ensure AI systems pursue intended goals
- Interpretability methods to understand how AI systems make decisions
- Robustness testing to identify potential failure modes
Responsible innovation
- Precautionary principle: Careful consideration of potential risks before deployment
- Iterative development: Gradual improvement with safety considerations at each step
- Long-term thinking: Consideration of long-term societal implications
Transparency and accountability
Public reporting
- Annual safety reports: Regular public updates on our safety practices and performance
- Research publication: Sharing relevant research findings with the broader community
- Open dialogue: Engagement with stakeholders on safety and ethics concerns
User empowerment
- Clear explanations: Users understand how AI affects their experience
- Control mechanisms: Tools for users to customize AI behavior to their preferences
- Feedback channels: Easy ways for users to report concerns or suggestions
Compliance and standards
Regulatory compliance
We adhere to relevant regulations including:
- GDPR and other data protection laws
- AI governance frameworks in jurisdictions where we operate
- Industry-specific regulations for our enterprise customers
International standards
We align with international standards such as:
- ISO/IEC standards for AI systems
- IEEE standards for ethical design
- NIST AI Risk Management Framework
Continuous improvement
AI safety and ethics is an evolving field. We commit to:
- Regular review: Periodic assessment and updating of our practices
- Learning from others: Staying informed about best practices across the industry
- Adapting to change: Flexibility to address new challenges and opportunities
Contact us
For questions about our AI safety and ethics practices, or to report concerns:
Email: ethics@divinci.ai
Address: Divinci AI Ethics Team, 312 Arizona Ave, Santa Monica, CA 90401
We welcome feedback and are committed to addressing concerns promptly and transparently.
Last updated: January 20, 2025
Our AI safety and ethics commitments are fundamental to who we are as a company. We will continue to evolve these practices as we learn and as the field advances, always with the goal of creating AI that benefits humanity.