3 Reasons Why GAFE’s Safety Features Aren’t Enough to Protect Students
When it comes to keeping students safe, Google Apps for Education (GAFE) has a couple of features that technology departments have at their disposal. In addition to providing the ability to restrict student email communications to users within the school or district, Google allows administrators to flag keywords so they can monitor students’ email messages based on word use.
In a similar post about Office 365, I discuss the limitations of locking down student email communications to users in a school or district. Here are three reasons why flagging words in GAFE isn't enough to protect your students.
Flagging Words Isn’t as Easy as It Sounds
Identifying suspicious items based on text is more difficult than you might imagine. Some districts flag the word “suicide,” for example, hoping that they will create opportunities to intervene with students considering self-harm.
While the intentions are good, flagging the term “suicide” alone is far from adequate.
During the first half of the 2015-16 school year, only one out of every five suicidal situations that we discovered included the word “suicide.” In fact, we have 19 other terms and phrases on record that suggest a student is engaging in, or considering, self-harm. It’s difficult to create a comprehensive list of all terms that might indicate a student is considering suicide, violence, drug and/or alcohol use and other concerning activities.
Maintaining a list of flagged words is also more difficult than it sounds. Our Blocked Words List requires constant revision to keep up with the ever-changing language and trends of K-12 students. In the past three months alone, over 75 additions and revisions were made to include fresh terminology and to refine existing terminology.
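To see why exact-match flagging falls short, here is a minimal sketch of a naive flagged-word check. The flagged-word list, helper function, and sample messages are hypothetical illustrations, not taken from any real product:

```python
# Hypothetical sketch: naive keyword flagging on a per-word basis.
# A single flagged term catches literal uses of the word but misses
# paraphrases entirely -- the core problem described above.

FLAGGED_WORDS = {"suicide"}

def is_flagged(message: str) -> bool:
    """Return True if any flagged word appears in the message."""
    words = message.lower().split()
    return any(word.strip(".,!?\"'") in FLAGGED_WORDS for word in words)

messages = [
    "My report is on suicide prevention programs.",  # flagged, but benign
    "I just want to end it all.",                    # concerning, not flagged
]

for msg in messages:
    print(is_flagged(msg), "-", msg)
```

The first message is flagged even though it is a legitimate assignment topic, while the second, genuinely concerning message passes untouched; exact keyword matching produces both false positives and false negatives at once.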
Blocking Words Doesn’t Block Pornographic Images
Flagging words doesn’t do anything about pornographic images, and contrary to what you might think, students do use their school-provided email accounts to send and receive pornography.
Google doesn’t provide a reasonable measure for blocking inappropriate images. Either students will have an unlimited ability to send and receive images, or images will need to be blocked as attachments entirely, which will severely limit their ability to do school work.
You Really Don’t Want to Rely on Yourself to Review Content
We have a dedicated team that reviews student email communications, files, attachments, links and much more. Even if a school or district writes this duty into the job description of a staff or faculty member, that person still has other responsibilities. What if the individual who handles flagged items is out of the office at a conference? What if they’re attending to other matters? What if it’s 2 a.m., and they’re asleep?
Simply put, the review of student content requires the full attention of a devoted staff. It’s unfair to expect your existing staff to respond effectively to inappropriate student content immediately and around the clock.
One final point: returning to the example above, students use words like “suicide” appropriately in assignments and conversations all the time. Based on incidents we discovered during the first half of the 2015-16 school year, only one in every 14,368 occurrences of the word “suicide” involves a student who is actually considering self-harm.
No administrator, counselor or educator should be responsible for finding such a small needle in such a large haystack. A list of flagged words alone doesn’t protect students as effectively as you might think.