Responsible AI Principles

Responsible AI principles provide a framework for the ethical development and deployment of artificial intelligence (AI) systems. These principles aim to ensure that AI technologies are developed and used in ways that uphold fairness, transparency, accountability, and respect for human rights and dignity. By adhering to these principles, organizations can mitigate risks associated with AI, foster trust among stakeholders, and promote beneficial outcomes for society.

  • Establish clear governance, roles, and processes for overseeing AI ethics.
  • Ensure AI systems are unbiased, inclusive, and undergo rigorous bias testing.
  • Strive for transparency in AI decisions and operations, supported by clear documentation and human oversight.
  • Embed data protection measures and robust security practices into AI initiatives.
  • Design AI systems with consideration for human values, impacts on society, and the environment.
  • Conduct thorough testing to ensure AI operates safely and reliably as intended.
  • Foster collaboration across disciplines and engage stakeholders to address diverse perspectives and concerns.

These principles guide organizations in developing AI technologies that not only perform effectively but also align with ethical standards and societal expectations. By integrating these principles into AI initiatives, businesses and developers can contribute to a future where AI benefits everyone while minimizing risks and ethical concerns.

ICS Compute Responsible AI Policy
Last Updated: 25/06/2024

This ICS Compute Responsible AI Policy (“Policy”) governs the use of artificial intelligence and machine learning services, features, functionality, and third-party models (collectively, “AI/ML Services”) offered by ICS Compute.

Prohibited Uses: You must not use, facilitate, or permit others to use the AI/ML Services for any of the following purposes:

  • Intentional dissemination of disinformation or deception;
  • Violation of individuals’ privacy rights, including unlawful tracking, monitoring, and identification;
  • Depicting an individual’s voice or likeness without their consent or appropriate rights, encompassing unauthorized impersonation and non-consensual sexual imagery;
  • Harming or exploiting minors, including grooming and child sexual abuse;
  • Harassing, harming, or encouraging harm to individuals or specific groups;
  • Intentionally circumventing safety measures and functionality or prompting models to act in violation of our Policies;
  • Performing lethal functions in weapons without human authorization or control.

Responsible AI Requirements: If you choose to employ the AI/ML Services for consequential decision-making, you are obligated to evaluate the potential risks associated with your specific use case and implement suitable human oversight, testing, and other tailored safeguards to mitigate these risks. Consequential decisions encompass those that significantly affect an individual’s fundamental rights, health, or safety (such as medical diagnosis, judicial proceedings, access to critical benefits like housing or government assistance, educational opportunities, employment decisions, access to lending/credit, and providing legal, financial, or medical advice). Upon request, you agree to furnish details regarding your intended uses of the AI/ML Services and compliance with this Policy.

You and your end users are exclusively accountable for all decisions made, advice given, actions taken, and failures to act based on your use of the AI/ML Services. The AI/ML Services leverage machine learning models that generate predictions derived from data patterns. Output generated by these machine learning models is probabilistic, and generative AI may produce inaccurate or inappropriate content. It is imperative that you assess the outputs for accuracy and appropriateness relative to your specific use case.