Aligning Generative AI with Mission and Values
A university‑wide hub for practical guidance and resources
The university’s use of AI must support our mission and reflect our core values of Excellence, Justice, Leadership, Creativity, Respect, Courage, and Inclusion. As AI use becomes more widespread, we recognize both the opportunities and the possible dangers of its use. The principles below are meant to guide the use of AI on our campus so that it aligns with our core values.
Human-Centered Focus
The use of AI should focus on integrating AI into human-centered, value-guided pursuits. We recognize the danger of AI replacing human expertise and commit to using AI to elevate, not replace, human excellence.

Accountability
We should use AI responsibly and always maintain human accountability for its use.

Transparency
We should disclose the use of AI and ensure that decision-making that relies on AI is transparent to relevant stakeholders. When selecting AI tools, we prefer more transparent and accountable models.

Privacy and Security
We should safeguard the privacy of our campus community and never inappropriately disclose personal data or confidential records. AI access to university data must align with existing university policy and practice.

Fairness
We commit to overseeing the use of AI on campus so as to promote fairness and mitigate the effects of ethically problematic biases that AI models can reproduce.

Equal Access
We commit to promoting equal access to AI tools and resources and to mitigating the negative impact of unequal access within our community.

Sustainability
We take seriously the societal and environmental impacts of AI use and commit to mitigating negative impacts whenever possible.
Generative AI expands how we study, teach, and work; it also amplifies risks such as data exposure, bias, hallucination, and over‑reliance on automated output. These principles anchor every recommendation on this site, ensuring that Puget Sound’s engagement with AI supports student learning, effective administration, and the university’s mission.
This site is Puget Sound’s central resource on generative AI. It explains what the technology can and cannot do in an academic setting, guides the campus community through responsible practice, and points to tools the university has vetted. Resources range from syllabus language for faculty to a student‑friendly primer on AI bias and hallucination. The guidance presented here grows out of the AI Working Group’s collaborative process, which drew input from students, staff, and faculty.
AI Working Group Membership:
Executive Sponsors: Drew Kerkhoff (Provost) and Kim Kvaal (CFO)
Project Leads: Gareth Barkin (Dean of Operations & Technology), Francisco Chavez (CIO)
Faculty Members: America Chambers, Siddharth Ramakrishnan, Ariela Tubert
Administrative Representatives: Janet Hallman, Stacy Kelly, Kevin Riordan
Technology Services: Faithlina Abeshima, Aaron Tran
Diversity, Equity, and Inclusion: Lorna Hernandez Jarvis (VP OIED)
© 2025 University of Puget Sound