Gaggle is dedicated to approaching artificial intelligence ethically and to detecting and correcting any determinable bias in our AI. We know how critical it is to serve all students equitably, and we strive to ensure our technology safeguards students fairly and appropriately.
Gaggle commits to these fundamentals of AI ethics:
A minority’s output data distribution is a function of that minority’s input data distribution. Therefore, determining the minority distribution of the input data is vital to ensuring that algorithms make fair and equitable decisions.
Gaggle gives statistical voice to self-reported and underrepresented minorities by ensuring they are fairly represented in the decision algorithms. This equalizes and constrains overall output bias.
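One common way to give underrepresented groups equal statistical voice (shown here as a generic illustration, not a description of Gaggle's actual system) is inverse-frequency reweighting, in which each example is weighted so that every self-reported group contributes equally to the model. The group labels below are hypothetical:

```python
from collections import Counter

def group_weights(groups):
    """Compute inverse-frequency weights so each self-reported group
    contributes equal total weight, regardless of its sample count."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    # Weight each example so every group's total weight sums to n / k.
    return [n / (k * counts[g]) for g in groups]

# A minority group with 2 of 10 samples gets a larger per-sample weight,
# so both groups contribute equally overall.
groups = ["majority"] * 8 + ["minority"] * 2
weights = group_weights(groups)
```

Weights like these can be passed to most training routines (e.g. a `sample_weight` argument) so that the fitted decision boundary is not dominated by the majority group's distribution.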
Undesired, unanticipated, and unintentionally biased output is always a risk, whether from uncompensated bias in the input data or from bias generated by the decision algorithms themselves. It is therefore vital that Gaggle listen and respond to any suspected bias observed by our community.
Gaggle is dedicated to detecting and compensating for any determinable bias at either the input or the output of our AI. If the community we serve suspects and reports an output bias that we did not detect ourselves, we will investigate, validate, and act on it.
At Gaggle, a diverse, inclusive, and equitable workplace is one where all employees, contractors, consultants, and customers, whatever their gender, race, ethnicity, national origin, age, sexual orientation, identity, education, or disability, feel valued and respected. We are committed to a nondiscriminatory approach and provide equal opportunity for employment and advancement in all of our departments, programs, and worksites. We respect and value diverse life experiences and heritages and ensure that all voices are valued and heard. We’re committed to maintaining an inclusive environment with equitable treatment for all.
Submit your evidence using this form, and our team will review it and respond.