The goal of Safety Management is to enable students to utilize all of the powerful online tools at their disposal and give them independence, while still keeping them safe. We believe the combination of powerful machine learning algorithms and a highly skilled team will allow kids to utilize these tools within a safe environment.
You can define machine learning as the ability of a machine to generalize knowledge from data.
In our context, this means we can feed the machine learning system both (1) a stream of student-generated text or images and (2) the clearly defined action taken by a Gaggle Safety Representative on each item, and the system will learn on its own what harmful images and text look like. Then, we'll be able to take that knowledge and generalize it to determine the proper action to be taken on fresh images and documents going forward.
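To make that loop concrete (labeled examples in, a generalizing model out), here is a toy naive Bayes text classifier. The texts, labels, and the "escalate"/"ignore" action names are all hypothetical illustrations, not real student data or Gaggle's actual model, which is far more sophisticated:

```python
from collections import Counter, defaultdict
import math

# Hypothetical labeled examples: (student-generated text, action a human
# reviewer took). All texts and labels here are illustrative only.
training_data = [
    ("i want to hurt myself tonight", "escalate"),
    ("this homework is killing me lol", "ignore"),
    ("nobody would miss me if i was gone", "escalate"),
    ("see you at practice after school", "ignore"),
    ("i cant take this anymore goodbye", "escalate"),
    ("cant wait for the game this weekend", "ignore"),
]

def train(examples):
    """Count how often each word appears under each human-assigned label."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def predict(text, word_counts, label_counts):
    """Pick the label with the highest naive Bayes log-probability,
    using add-one smoothing so unseen words don't zero out a score."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total = sum(label_counts.values())
    scores = {}
    for label in label_counts:
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

word_counts, label_counts = train(training_data)
print(predict("i want to hurt myself", word_counts, label_counts))  # escalate
```

The key point of the sketch is that the model was never given a list of dangerous keywords; it learned which words signal trouble entirely from the reviewers' past decisions.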
Not surprisingly, the input into this system—the data—is of vital importance. Without reliable and accurate data, the machine learning algorithms will not be properly calibrated to identify the appropriate instances and determine the right action.
So how do we get this data?
Simply put, we use the power of human discernment and decision-making.
We have a team that is specially trained to identify instances of bullying, self-harm, pornography and other serious situations. Each case is reviewed at least twice before we take any action. We use the decisions made by our skilled team of Safety Representatives as the key input into our machine learning system, which takes our protection framework much deeper and makes it much more sophisticated than simply using keywords as an indicator of potential problems.
For instance, in a previous post, we discussed the limitations of simply flagging the word “suicide.” Based on incidents that we discovered during the first half of a school year, only one in every 14,368 occurrences of the word “suicide” involved a student who was actually considering self-harm.
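The arithmetic behind that figure shows why a bare keyword is such a weak signal. If every occurrence were flagged for review, only a tiny fraction of flags would correspond to a real incident:

```python
# The figure quoted above: one genuine incident per 14,368 occurrences
# of the flagged keyword over half a school year.
occurrences = 14368
true_incidents = 1

# If every occurrence were flagged, this is the fraction of flags that
# would point to a student actually considering self-harm.
keyword_precision = true_incidents / occurrences
print(f"{keyword_precision:.5%}")  # prints 0.00696%
```

In other words, a pure keyword filter would send reviewers more than fourteen thousand false alarms for every student who genuinely needed help.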
Of course, this doesn’t mean the decisions made by the model are correct 100% of the time. After all, determining whether a certain excerpt from an email is a sarcastic joke or a serious threat is not always an easy distinction to make. Thus, model evaluation and consistent improvement are critically important. The system can be continually recalibrated and improved by tracking model predictions against the decisions made by Safety Representatives. Although the model won’t always be right, its level of accuracy is impressive, and consistent evaluation allows for continued updates and improvements.
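Tracking model predictions against human decisions can be sketched as a simple evaluation pass. The record below is hypothetical; it treats the Safety Representative's decision as ground truth and reports how often the model agreed, how many of its flags were real incidents (precision), and how many real incidents it caught (recall):

```python
# Hypothetical (model prediction, Safety Representative decision) pairs
# logged during review; labels and data are illustrative only.
records = [
    ("escalate", "escalate"),
    ("ignore", "ignore"),
    ("escalate", "ignore"),    # false alarm: sarcasm read as a threat
    ("ignore", "ignore"),
    ("escalate", "escalate"),
    ("ignore", "escalate"),    # miss: a serious message scored as a joke
]

def evaluate(pairs, positive="escalate"):
    """Score model predictions against human decisions, treating the
    human decision as ground truth."""
    tp = sum(1 for pred, human in pairs if pred == positive and human == positive)
    fp = sum(1 for pred, human in pairs if pred == positive and human != positive)
    fn = sum(1 for pred, human in pairs if pred != positive and human == positive)
    accuracy = sum(1 for pred, human in pairs if pred == human) / len(pairs)
    precision = tp / (tp + fp)   # how many flags were real incidents
    recall = tp / (tp + fn)      # how many real incidents were flagged
    return accuracy, precision, recall

acc, prec, rec = evaluate(records)
print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f}")
```

Running a pass like this regularly is what "consistent model evaluation" means in practice: each disagreement between model and reviewer becomes a fresh training signal for recalibration.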
This machine learning system allows us to be more efficient, manage costs and, most importantly, more accurately detect when kids need help. The combination of this sophisticated system and our highly skilled team allows us to keep kids safe and helps schools and parents understand issues their children may be facing and get them the help they need.