Keeping Humans and Machines in the Loop

Written by Steve Simpson | Feb 10, 2017 1:17:31 PM

In an earlier post, my colleague Derek Jedamski described the basic workflow of how we put machine learning to use to make online collaboration and learning a safer place for students.

We finished that post by noting that machine learning models aren't correct 100% of the time, because human language is quite nuanced: a sarcastic joke can sound like a serious threat without proper context and experience. Discerning the difference between a joke and a threat is a uniquely human ability, which is why Gaggle Safety Management uses both machine learning and trained Safety Representatives to provide the most accurate judgment possible.

Such a system is known as “human-in-the-loop,” a branch of machine learning that is becoming increasingly common. Top companies are finding this type of system indispensable for providing top-quality services: examples range from Facebook’s virtual assistants and fake news detection, to Pinterest’s filtering of hateful or undesired pins, to the general workflow of how people use Google.

If human intuition and experience are ultimately what's needed to make the final call, then that same standard is also the benchmark of success for machine learning. Machine learning is not going to replace humans at these higher-level interpretations, but it can do an excellent job of finding the cases worthy of a human evaluator's time and putting them into a curated, prioritized list.
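That triage pattern is easy to picture in code: a model scores content, and only the plausible cases are queued, highest risk first, for a person to review. Below is a minimal Python sketch of the idea; the keyword-based `risk_score` is just a stand-in for a real trained classifier, and none of these names come from Gaggle's actual pipeline. Notice that it also shows the sarcasm problem from earlier: the joking message gets flagged too, and only the human reviewer can tell the difference.

```python
# Sketch only: a model scores flagged items and queues them for human review,
# highest risk first. risk_score() is a toy stand-in for a trained classifier.
from dataclasses import dataclass, field
import heapq


@dataclass(order=True)
class ReviewItem:
    # heapq pops the smallest value first, so the score is stored negated
    # to put the highest-risk content at the front of the queue.
    priority: float
    text: str = field(compare=False)


def risk_score(text: str) -> float:
    """Stand-in for a classifier's probability that the text is a real threat."""
    keywords = {"hurt", "kill", "weapon"}
    hits = sum(word in text.lower() for word in keywords)
    return min(1.0, 0.3 * hits)


def build_review_queue(messages: list[str]) -> list[ReviewItem]:
    """Keep only items the model considers worth a human evaluator's time."""
    queue: list[ReviewItem] = []
    for msg in messages:
        score = risk_score(msg)
        if score > 0.0:
            heapq.heappush(queue, ReviewItem(priority=-score, text=msg))
    return queue


if __name__ == "__main__":
    queue = build_review_queue([
        "I'm going to kill it on the math test tomorrow!",
        "He said he brought a weapon and wants to hurt someone.",
        "See you at practice.",
    ])
    while queue:
        item = heapq.heappop(queue)
        print(f"score={-item.priority:.1f}  ->  {item.text}")
```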

Thus, our system is built to let each side do what it does best: humans bring the intuition and experience to make the tough calls and to supervise the system overall, while machines excel at performing and logging the repetitive work of applying general rules.
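The “loop” also runs in both directions: each call a Safety Representative makes can be logged as a labeled example that future versions of the model learn from. The sketch below shows that feedback step in its simplest form; `record_decision` and `LabeledExample` are illustrative assumptions, not a description of Gaggle's production system.

```python
# Sketch only: every human decision becomes a labeled example that the next
# model version can be trained on.
from dataclasses import dataclass


@dataclass
class LabeledExample:
    text: str
    is_real_threat: bool  # the human reviewer's final call


# Accumulated labels that feed the next round of model training.
training_data: list[LabeledExample] = []


def record_decision(text: str, is_real_threat: bool) -> None:
    """Store the reviewer's judgment so a future model can learn from it."""
    training_data.append(LabeledExample(text, is_real_threat))


# The machine flags, the human decides, and the decision flows back into training.
record_decision("I'm going to kill it on the math test tomorrow!", is_real_threat=False)
record_decision("He said he brought a weapon and wants to hurt someone.", is_real_threat=True)
```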
