AI Ethics

Gaggle is dedicated to approaching artificial intelligence ethically, detecting and correcting any determinable bias in our AI. We know how critical it is to serve all students equitably, and we strive to ensure our technology safeguards students in a fair and appropriate manner.

Evaluate, Constrain, Own (ECO™)

Gaggle commits to these fundamentals of AI ethics:

Evaluate Bias in the Input Data

A minority group’s distribution in the output data is a function of its distribution in the input data. Determining the minority distribution of the input data is therefore vital to ensuring that algorithms make fair and equitable decisions.

Example: A customer may have a disproportionately small population of Black students. Determining the ratio of Black to non-Black student media establishes the expected baseline for the decision algorithms’ output, as the sketch below illustrates.
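As a minimal illustration of this first step, the Python sketch below estimates each group’s share of the input data, which sets the baseline an unbiased output would be expected to match. The record structure and the group_field key are hypothetical; the demographic labels available depend on what a client can provide.

```python
from collections import Counter

def input_distribution(records, group_field="race"):
    # Estimate each group's share of the input data. The record
    # structure and the `group_field` key are hypothetical; real
    # input media would carry whatever demographic labels the
    # client is able to provide.
    counts = Counter(r[group_field] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# If 12% of student media comes from Black students, an unbiased
# decision algorithm's flagged output should, all else being equal,
# reflect roughly that same 12% share.
records = [{"race": "black"}] * 12 + [{"race": "non_black"}] * 88
print(input_distribution(records))  # {'black': 0.12, 'non_black': 0.88}
```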

Constrain Bias in the Algorithmic Output

Gaggle gives statistical voice to self-reported and underrepresented minorities by ensuring they are fairly represented in the decision algorithms. This equalizes and constrains overall output bias.

Example: Suppose a client could provide input data labeled by race. Gaggle would train its algorithms so that each race had fair representation in the decision algorithms, regardless of the client’s student demographics. One common way to achieve this is sketched below.
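One standard balancing technique that fits this description is inverse-frequency sample weighting. Gaggle’s actual training procedure is not described here, so treat this sketch as illustrative only.

```python
from collections import Counter

def balanced_sample_weights(groups):
    # Inverse-frequency weights: every group carries equal total
    # weight during training, no matter how small its share of the
    # client's data. Mirrors scikit-learn's "balanced" class-weight
    # heuristic: weight = total / (n_groups * group_count).
    counts = Counter(groups)
    total, n_groups = len(groups), len(counts)
    return [total / (n_groups * counts[g]) for g in groups]

groups = ["black"] * 10 + ["non_black"] * 90
weights = balanced_sample_weights(groups)
# Each group's summed weight is now equal (~50.0 apiece), giving the
# minority group equal statistical voice in the trained model.
print(sum(w for w, g in zip(weights, groups) if g == "black"))
print(sum(w for w, g in zip(weights, groups) if g == "non_black"))
```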

Own Unintentionally Unconstrained or Unobserved Bias

Undesired, unanticipated, and unintentionally biased output is always a risk, whether from uncompensated bias in the input data or from bias generated by the decision algorithms themselves. It is therefore vital that Gaggle listen and respond to any suspected bias observed by our community.

Example: What would happen if Gaggle were made aware that a Black student’s email should have been flagged by our technology for concerning content but wasn’t?

In this instance, we would:

Acknowledge receipt of the examples to the client and confirm that we owe a response.

Evaluate whether the Black/non-Black false-negative rate is statistically proportional to the Black/non-Black ratio assessed at the input (see the sketch after this list). If not, the algorithm would require retraining and/or correction.

Expedite a response to the client detailing our technical assessment and action.
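As an illustration of the second step, the sketch below applies a standard two-proportion z-test to the two groups’ false-negative rates. The test choice, function name, and numbers are all hypothetical; they stand in for whatever statistical assessment would actually be performed.

```python
from math import sqrt
from statistics import NormalDist

def fn_rate_disparity_test(fn_a, total_a, fn_b, total_b):
    # Two-sided two-proportion z-test: are the two groups'
    # false-negative rates statistically distinguishable?
    # fn_*    = items that should have been flagged but were not
    # total_* = all items that should have been flagged
    p_a, p_b = fn_a / total_a, fn_b / total_b
    p_pool = (fn_a + fn_b) / (total_a + total_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical audit: 8 misses out of 40 Black-student items versus
# 30 misses out of 400 non-Black items.
z, p = fn_rate_disparity_test(8, 40, 30, 400)
if p < 0.05:
    print(f"Disproportionate false-negative rate (z={z:.2f}, p={p:.3f});"
          " retrain and/or correct the algorithm.")
```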

Gaggle is dedicated to detecting and compensating for any determinable bias at either the input or the output of our AI. Any determinable output bias that goes undetected by us but is suspected and reported by the community we serve shall be investigated, validated, and acted upon.

“Our AI is a sentinel for children, finding those whose well-being is compromised. By committing to our AI-ECO principles, we ensure our AI qualifies the well-being of all students fairly and equitably.”

Diversity, Equity, and Inclusion

At Gaggle, a diverse, inclusive, and equitable workplace is one where all employees, contractors, consultants, and customers, whatever their gender, race, ethnicity, national origin, age, sexual orientation, identity, education, or disability, feel valued and respected. We are committed to a nondiscriminatory approach and provide equal opportunity for employment and advancement in all of our departments, programs, and worksites. We respect and value diverse life experiences and heritages and ensure that all voices are valued and heard. We’re committed to maintaining an inclusive environment with equitable treatment for all.

Suspect Bias?

Submit your evidence using this form. Our team will review it and get back to you.