Gaggle is dedicated to approaching artificial intelligence ethically, detecting and correcting any determinable bias in our AI. We know how critical it is to serve all students equitably, and we strive to ensure our technology safeguards students in a fair and appropriate manner.
A minority group’s output data distribution is a function of that group’s input data distribution. Therefore, determining the minority distribution of the input data is vital to ensuring that the algorithms make fair and equitable decisions.
Example: A customer may have a disproportionately small population of Black students. Determining the ratio of Black to non-Black student media establishes the expected baseline for the decision algorithms’ output.
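As a minimal sketch of what this measurement could look like (the group labels, record counts, and flag totals below are purely hypothetical, not Gaggle data), the input share of each group and the output volume it implies can be computed directly:

from collections import Counter

def group_shares(records, group_key="race"):
    # Share of the input media contributed by each self-reported group.
    # `records` and the field name "race" are illustrative assumptions.
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def expected_flags(shares, total_flags):
    # If flagged items are proportional to input shares, this is the
    # expected number of flags per group.
    return {group: share * total_flags for group, share in shares.items()}

# Hypothetical district: 12% of student media comes from Black students.
records = [{"race": "Black"}] * 120 + [{"race": "non-Black"}] * 880
shares = group_shares(records)
print(shares)                      # {'Black': 0.12, 'non-Black': 0.88}
print(expected_flags(shares, 50))  # {'Black': 6.0, 'non-Black': 44.0}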
Gaggle gives statistical voice to self-reported and underrepresented minorities by ensuring they are fairly represented in the decision algorithms. This equalizes and constrains overall output bias.
Example: Suppose a client provided input data labeled by race. Gaggle would train its algorithms so that each race has fair representation in the decision algorithms, regardless of the client’s student demographics.
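One common way to give every group equal weight during training, sketched here under purely hypothetical demographics (this is not Gaggle’s production pipeline), is to weight each training example inversely to its group’s frequency:

import numpy as np

def balanced_sample_weights(groups):
    # Weight each training example inversely to its group's frequency so
    # every group contributes equally to the training objective,
    # regardless of how many examples the client supplies per group.
    groups = np.asarray(groups)
    labels, counts = np.unique(groups, return_counts=True)
    per_group = {label: len(groups) / (len(labels) * count)
                 for label, count in zip(labels, counts)}
    return np.array([per_group[g] for g in groups])

# Hypothetical client data: 120 examples from one group, 880 from another.
groups = ["Black"] * 120 + ["non-Black"] * 880
weights = balanced_sample_weights(groups)
# Each group's weights now sum to 500.0, so the smaller group is not
# drowned out; the array can be passed as `sample_weight` to most
# classifiers' fit() methods.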
Undesired, unanticipated, and unintentional bias in the output is always a risk, whether from uncompensated bias in the input data or from bias generated by the decision algorithms themselves. It is therefore vital that Gaggle listens and responds to any suspected bias observed by our community.
Example: What would happen if Gaggle were made aware that a Black student’s email should have been flagged by our technology for concerning content but wasn’t?
In this instance, we would:
Acknowledge receipt of the examples to the client and confirm that we owe a response.
Evaluate whether the distribution of false negatives across Black and non-Black students is statistically proportional to the Black/non-Black ratio assessed at the input; a sketch of this check follows the list. If it is not, the algorithm would require retraining and/or correction.
Expedite a response to the client detailing our technical assessment and action.
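A minimal sketch of the evaluation step above, assuming hypothetical counts and a standard chi-square goodness-of-fit test (scipy’s chisquare), might look like this; the function name, counts, and significance threshold are illustrative assumptions, not Gaggle’s actual procedure:

from scipy.stats import chisquare

def fn_proportionality_check(fn_counts, input_shares, alpha=0.05):
    # Test whether false negatives are spread across groups in proportion
    # to each group's share of the input media.
    #   fn_counts[g]    = missed items (false negatives) observed for group g
    #   input_shares[g] = group g's share of the input media (sums to 1)
    groups = list(fn_counts)
    total_fn = sum(fn_counts.values())
    observed = [fn_counts[g] for g in groups]
    expected = [input_shares[g] * total_fn for g in groups]
    _, p_value = chisquare(f_obs=observed, f_exp=expected)
    return p_value, p_value < alpha  # True means disproportionate

# Hypothetical review: 12% of input media is from Black students, yet
# 9 of the 39 reported false negatives involve Black students.
p_value, disproportionate = fn_proportionality_check(
    fn_counts={"Black": 9, "non-Black": 30},
    input_shares={"Black": 0.12, "non-Black": 0.88},
)
# If `disproportionate` is True, the algorithm is a candidate for
# retraining and/or correction, per the evaluation step above.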