Responsible Use of AI in Advising
Virginia Tech provides advisors with access to university-approved artificial intelligence (AI) tools and platforms that align with our institutional principles and established governance frameworks (VT AI Framework, NACADA). All approved tools undergo evaluation for security, privacy, effectiveness, and alignment with university values prior to authorization for campus use.
The advising community applies a Human-in, Human-out (HIHO) approach to the use of AI. This approach ensures that human professionals initiate, guide, and validate all AI-assisted work, maintaining meaningful human engagement throughout the process. Under this approach, advisors remain fully responsible for their interactions, decisions, and the support they provide to students. AI may be used to enhance the student experience and strengthen human-centered advising practices, but it does not replace human insight, creativity, or well-being.
Advisors must access and use AI tools exclusively through their Virginia Tech accounts and only when those tools are institutionally approved. When AI tools are used in contexts involving others (e.g., transcribing appointments), explicit consent must be obtained from all participants. Advisors are encouraged to disclose when they have used AI to prepare materials or inform recommendations. Any summaries, drafts, or outputs generated by AI must be reviewed carefully to ensure accuracy and appropriateness and to remove bias, including any interpretations AI tools may generate from conversations between two or more parties. Advisors who identify concerning AI outputs may report these observations to Heather Whedbee to support ongoing evaluation of approved tools.
The advising community prioritizes the safety, security, privacy, and protection of individuals and proprietary university data. No personally identifiable information, education records, or student-specific information protected under FERPA may be entered into AI tools (including uploads of images/screenshots, spreadsheets, audio, and slide decks) unless the system has been formally approved and configured for FERPA-aligned use. Advisors who plan to use AI tools should review the Data Risk Classification table in Virginia Tech's Risk Classification Standards to understand the risk levels associated with FERPA data. Some AI tools are approved for certain risk levels while others are not, so it is important that advisors understand these differences; 4Help Knowledge Base articles on AI tools can help identify the appropriate risk level for each tool. Advisors retain the professional discretion to decline the use of unauthorized or unapproved AI tools proposed by other parties during advising interactions, in order to remain compliant with university policy and privacy requirements.
Appropriate uses of AI tools within the advising community may include:
- Enhancing written communication and materials
- Generating ideas for programming, data analysis, and reporting
- Developing presentations, videos, trainings, and other instructional media
- Transcribing interactions (with documented consent)
- Supporting advising workflows and operational practices
When used responsibly, AI can enhance advising practices while upholding Virginia Tech’s standards for privacy, accuracy, accountability, and student trust. The advising community will continue to explore new applications as tools mature and governance evolves.
Support & Resources for Using AI:
- VT Risk Classification Standards
- NACADA AI Foundations e-Tutorial
This is a living document, last updated February 2026. Advisors should refer to ai.vt.edu for the most current information regarding approved AI tools, institutional guidelines, and governing principles.