GA-CCRi Analytical Development Services

Statement of Ethical Principles

GA-CCRi helps our customers make big decisions, often within a large and complex ethical context. Below is GA-CCRi’s statement of principles, which provides a framework for thinking through ethical problems as part of our design and development.

Our Principles

GA-CCRi prioritizes the following ethical principles:

  1. We must build systems that allow operators to exercise appropriate levels of human judgment:
    1. System design should ensure that outputs, under all circumstances, are consistent with expectations and restricted to the scope of the contract.
    2. Where this applies to defense work, we will operate within military codes of conduct and published US government policies.
    3. This includes creating fail-safes so that users can stop a system before it gets out of control.
    4. It also includes keeping the burden of safety on ourselves as system designers rather than relying on “fine print” as a fail-safe (that is, not relying solely on the user to protect against unintended consequences).
  2. Individual privacy is a right:
    1. When using large datasets, we will anonymize the data to remove any Personally Identifiable Information (PII), or, if that is not feasible, we will take other steps to ensure individual privacy.
    2. We will take extra precautions when anonymized data could still be used to identify an individual.
    3. There may be situations in which the right to privacy does not apply, such as when:
      1. A user has provided the information publicly.
      2. An individual has explicitly or implicitly waived the right to privacy.
      3. The situation concerns issues of national security.
  3. We will make every effort to treat different groups of people with equal levels of respect:
    1. To that end, all stakeholder groups should be represented on the teams that engineer and create our systems.
  4. Users should be given meaningful information and empowering choices:
    1. Users must be allowed to opt out of research studies.
    2. Users must be informed of limitations of the tools they are using.
    3. Information must be summarized enough to be useful, but detailed enough not to be misleading.
    4. Stakeholders should understand the systems that they are using:
      1. We will create “Explainable AI”: we will avoid black boxes where possible and seek to expose the relevant inner workings of models and systems.
  5. Following legal guidelines is only a minimum threshold for our actions:
    1. We acknowledge that the law may not keep pace with technology, and we will hold ourselves to the standards we believe in, even if the law has not caught up.
    2. We reserve the right to say no to any client or stakeholder, even in the midst of executing a contract or Statement of Work, if that client or stakeholder asks us to perform work that is illegal or unethical.