What Not to Post Into ChatGPT as a Teacher: Some Thoughts for Educators and School Administrators
- The White Hatter

Artificial intelligence tools like ChatGPT are becoming valuable aids in education. Last year we witnessed firsthand teachers and school administrators using AI to draft lesson plans, generate discussion questions, adapt materials for different learning levels, and handle internal communications. While these tools can save time and spark creativity, it’s important to remember that what you type into ChatGPT matters. Not everything belongs in a prompt!
Using ChatGPT, or any AI platform, responsibly is about more than avoiding risks; it’s about modelling good digital practices, protecting trust, and showing students and families that technology can be integrated into education thoughtfully and with intention. Below are key categories of information that we recommend you avoid entering into ChatGPT, along with safer alternatives.
1. Personally Identifiable Information (PII)
It can be tempting to enter student-specific details into ChatGPT to get tailored results, but this poses serious privacy risks. Information such as student names, addresses, phone numbers, birthdays, or ID numbers can leave children vulnerable if misused. Even something as small as including a student’s full name in a prompt could be enough to expose them to identity theft or unwanted digital tracking. In the case of images, such as student photos, posting them without explicit parental or caregiver consent could violate both ethical standards and legal obligations. Think of PII as a “do not share” category: keep prompts general and anonymized to protect student safety.
Safer prompt alternative: Keep prompts broad. Instead of “Write a recommendation letter for Sarah Thompson in Grade 9,” try “Write a generic recommendation letter for a high school student excelling in science.” This still produces useful results without exposing identities.
2. Sensitive Student Records
Grades, report card comments, Individualized Education Plans (IEPs), and disciplinary notes are considered highly confidential. These records often fall under strict provincial privacy laws in Canada. Entering them into ChatGPT could not only be a privacy violation but could also erode trust between families and schools. For example, sharing an IEP in a prompt might feel like a quick way to draft classroom accommodations, but it risks exposing sensitive information that is legally protected. Instead, describe the need in general terms, such as “create strategies to support a student with attention challenges,” without referencing personal details.
Safer prompt alternative: Focus on general needs. Rather than copying an IEP into ChatGPT, ask, “What are some classroom strategies that support a student with attention challenges?” You get practical insights without risking a breach of confidentiality.
3. Confidential Staff Information
ChatGPT should never be a place to process internal staff matters. Inputting things like HR evaluations, performance notes, or sensitive email exchanges with colleagues creates unnecessary risks. Such information is meant to remain private between administrators, HR, and the individuals involved. Accidentally exposing these details, even in anonymized form, could harm professional relationships and trust within the school community. A good rule of thumb is that if you wouldn’t want your comment about a colleague shared publicly, it doesn’t belong in a ChatGPT prompt.
Safer prompt alternative: If you need help, use generic prompts such as “Draft a neutral email to staff announcing a new professional development opportunity.” This keeps communication professional without exposing internal matters.
4. Unpublished Assessments
Posting test questions, answer keys, or exam drafts into ChatGPT can undermine the fairness and integrity of your assessments. Once content is entered, there is always the possibility that a similar version may become accessible to students using AI themselves. Imagine a student typing in “Grade 9 math test practice questions” and accidentally pulling material that matches your unpublished exam; it’s not worth the risk. Keep your assessment content offline and instead use AI to generate general practice problems or ideas for alternative assessments.
Safer prompt alternative: Use ChatGPT for practice or study materials instead. For example: “Generate 10 algebra practice questions for Grade 9 students.” This keeps your secure exams private while still giving you fresh resources to support learning.
5. Copyrighted or Licensed Material
Many schools invest in textbooks, paid articles, or worksheets that are licensed for educational use. Copying and pasting entire sections of these resources into ChatGPT could violate copyright protections. While summarizing or paraphrasing material is generally fine, uploading full passages crosses into legally risky territory. For example, entering a full chapter from a textbook into ChatGPT to create a study guide may unintentionally breach your license agreement with the publisher. Stick to referencing ideas or summarizing concepts rather than reproducing copyrighted material in full.
Safer prompt alternative: Ask ChatGPT to work with concepts, not full copies. Try “Summarize the key themes of a chapter on ecosystems” instead of pasting the chapter itself. This approach respects copyright while still producing useful teaching material.
6. Sensitive Scenarios Without Context
Prompts like “Write a discipline letter for a student caught cheating” or “Draft a suspension notice” can be problematic if they relate to actual incidents. Even if names are removed, the details could mirror a real-life case, making it identifiable to someone familiar with the situation. Without context, AI-generated responses might also come across as overly harsh or legally inappropriate. If you want help with phrasing disciplinary communication, frame the scenario as a purely hypothetical exercise rather than tying it to a real incident.
Safer prompt alternative: Frame scenarios as hypothetical, for example, “Create a generic sample letter to parents about academic dishonesty.” This provides templates you can adapt without exposing a real student’s situation.
7. Login Credentials or Access Keys
This is perhaps the most straightforward but critical rule: never type usernames, passwords, or access keys into ChatGPT. These credentials are sensitive security assets, and there’s no scenario where the tool requires them to generate an answer. For example, entering your school portal password into a prompt while troubleshooting login issues exposes you to unnecessary security risks. If you need tech help, go directly through your district’s IT team or official support channels. Treat login credentials as sacred, never to be shared with AI platforms.
Note: The good news is that AI never needs login credentials to provide useful support. You can get help with phrasing, lesson ideas, or communication drafts without ever sharing secure information.
Some educators may ask, “Doesn’t ChatGPT anonymize or delete data?” While platforms like ChatGPT have safeguards, there’s no guarantee that prompts won’t be stored, processed, or used in future model training. Even anonymized data could become identifiable in the wrong context. Following a “better safe than sorry” principle protects not only student privacy but also professional integrity. Here are some practical tips for educators and school administrators on the safer use of ChatGPT:
Think of Prompts as Public
A good mental habit is to treat every prompt you type into ChatGPT as if it could eventually be read by someone outside your classroom. While the platform has safeguards in place, it’s still best to assume nothing is ever fully private once typed online. This mindset helps prevent the accidental sharing of student or staff information. For example, instead of entering, “Write a recommendation letter for Sarah Thompson in Grade 9,” reframe the prompt as, “Write a generic recommendation letter for a high school student excelling in science.” Keeping prompts broad ensures you benefit from the tool without compromising trust or privacy.
Use Anonymized Data
When you do need to reference classroom situations, replacing specifics with generalities is a safe and effective practice. For example, instead of saying, “Johnny, a 10-year-old with ADHD, needs classroom strategies,” you might prompt, “What are some classroom strategies that support a student with attention challenges?” This way, you still get practical suggestions tailored to your teaching needs without disclosing private information about a real child. Using stand-ins like “Student A” or “a Grade 7 learner” allows you to maintain accuracy while respecting confidentiality.
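For districts that want to support this habit at scale, the stand-in principle can even be partially automated. The short Python sketch below is a purely hypothetical illustration of how an IT team might swap a known list of identifiers for neutral stand-ins before any text is pasted into an AI tool; the names, patterns, and replacements are our own examples, not part of any real system, and a simple find-and-replace script is no substitute for a vetted redaction tool or for district policy.

```python
import re

# Hypothetical illustration only: a tiny find-and-replace scrubber that
# swaps known identifiers for neutral stand-ins before text is shared
# with an AI tool. Real PII detection is much harder than this; treat
# it as a sketch of the "Student A" principle, not a security control.
STAND_INS = {
    r"\bJohnny\b": "Student A",  # example name, assumed
    r"\ba 10-year-old with ADHD\b": "a learner with attention challenges",
}

def anonymize(text: str) -> str:
    """Replace each known identifier with its neutral stand-in."""
    for pattern, stand_in in STAND_INS.items():
        text = re.sub(pattern, stand_in, text)
    return text

if __name__ == "__main__":
    prompt = "Johnny, a 10-year-old with ADHD, needs classroom strategies."
    print(anonymize(prompt))
    # Prints: Student A, a learner with attention challenges, needs classroom strategies.
```

Even with a script like this in place, the safest habit remains the one described above: write the prompt generically in the first place.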
Check School Policies
Every school district or educational institution may have its own guidelines for how AI tools should be used. Some may allow teachers to experiment freely, while others may require specific training or prohibit entering student-related data altogether. Familiarizing yourself with these rules protects both you and your students. For example, a district policy might clarify whether teachers can use AI to draft lesson content but restrict its use for assessment design. By knowing your institution’s stance, you ensure your practices align with legal, ethical, and professional expectations.
Model Safe Use for Students
Students watch closely how their teachers use technology, and your behaviour sets the tone for their own. By demonstrating safe prompting practices, you show students how to engage responsibly with AI. As an example, you might explain to your class why you never include full names in your prompts, or you could model how to phrase a safe question such as, “Give me tips for writing a persuasive essay” rather than, “Write my essay for me on climate change.” This not only protects them but also teaches digital literacy skills they will carry beyond the classroom. When students see educators practicing thoughtful, cautious use, they learn that AI can be a helpful tool without crossing ethical or privacy lines.
ChatGPT can be a powerful teaching partner when used responsibly. By keeping sensitive details out of prompts, educators can leverage the benefits of AI while protecting their students, colleagues, and themselves.
One more important consideration: if your school is planning to integrate AI, we strongly recommend hosting a dedicated parent/teacher night to explain the “who, what, where, when, why, and how” of the platforms being used. A great deal of misinformation and disinformation about AI is flooding social media, spread by groups opposed to schools using AI in any capacity. Transparency is key when introducing any new technology into the classroom, and parents and caregivers deserve to understand not just that AI is being used, but how it will impact their child’s education.
During such an evening, schools can walk families through the specific platforms that will be adopted, the purpose they serve in supporting learning, and the safeguards in place to protect student privacy. For example, teachers might explain how AI will be used to draft practice questions or generate creative prompts, while clarifying that sensitive student data will never be entered into the system. Administrators can outline the district’s official policies, demonstrate practical examples of safe use, and give parents the opportunity to ask questions directly.
This type of event also helps build trust. Parents and caregivers often feel uncertain about new technologies because of what they hear in the media or online. A face-to-face conversation provides space to address concerns, correct misconceptions, and highlight the benefits when AI is used responsibly. It also reinforces the idea that schools and families are partners in navigating today’s onlife world, working together to ensure that AI is a tool for growth rather than a source of risk.
By opening the door to dialogue, schools show parents and caregivers that their voices matter and that student well-being remains the top priority. In short, a parent/teacher night is not just a courtesy; it’s an essential step in building confidence and community around the responsible use of AI in education.
ChatGPT can be a powerful teaching and administrative partner when used wisely. The key is to treat it like any other online platform: share ideas, not identities. By following safe practices, checking institutional policies, and engaging parents and caregivers in the conversation, educators and administrators can unlock the benefits of AI while protecting their students, colleagues, and communities.
For school administrators, we have attached a poster based on this article that you can post in your staff rooms as a reminder for all your educators. Please also share this article with your educators to provide context for the poster. Knowledge, and the understanding and application of that knowledge, is power!

If you are looking for an in-depth professional development program designed for educators on the topic of AI, check out our Pro-D educator program here: https://www.thewhitehatter.ca/programs/raising-ai-ready-youth-safety-privacy-ethics-and-success-plans
Digital Food For Thought
The White Hatter
Facts Not Fear, Facts Not Emotions, Enlighten Not Frighten, Know Tech Not No Tech