Aligning Generative AI with Mission and Values

A university‑wide hub for practical guidance and resources

Principles for the Mission-Centered and Ethical Use of AI at the University of Puget Sound

The university’s use of AI must support our mission and reflect our core values of Excellence, Justice, Leadership, Creativity, Respect, Courage, and Inclusion. As AI becomes more widespread, we recognize both its opportunities and its possible dangers. The principles below are meant to guide the use of AI on our campus so that it aligns with our core values.


Human-Centered Focus

AI should be integrated into human-centered, value-guided pursuits. We recognize the danger of AI replacing human expertise and commit to using AI to elevate, not replace, human excellence.

Accountability

We should use AI responsibly and always maintain human accountability for its use.

Transparency

We should disclose the use of AI and ensure that AI-assisted decision-making is transparent to relevant stakeholders. When selecting AI tools, we prefer more transparent and accountable models.

Privacy and Security

We should safeguard the privacy of our campus community and not inappropriately disclose personal data or confidential records. AI access to university data must align with existing university policy and practices.

Fairness

We commit to overseeing the use of AI on campus to promote fairness and to mitigate the effects of ethically problematic biases that AI models can reproduce.

Equal Access

We commit to promoting equal access and mitigating the negative impact of unequal access to AI tools and resources within our community.

Sustainability

We take seriously the societal and environmental impacts of the use of AI and commit to mitigating negative impacts whenever possible.

Why These Principles Matter

Generative AI expands how we study, teach, and work; it also amplifies risks such as data exposure, bias, hallucination, and over‑reliance on automated output. These principles anchor every recommendation on this site, ensuring that Puget Sound’s engagement with AI supports student learning, effective administration, and the university’s mission.


AI & Human Values Group
A space for interdisciplinary exploration and innovation in artificial intelligence at Puget Sound

About AI at Puget Sound

This site is Puget Sound’s central resource on generative AI. It explains what the technology can and cannot do in an academic setting, guides the campus community through responsible practice, and points to tools the university has vetted, offering everything from syllabus language for faculty to a student‑friendly primer on AI bias and hallucination. The guidance presented here grows out of the AI Working Group’s collaborative process, which drew input from students, staff, and faculty.

AI Working Group Membership:
Executive Sponsors: Drew Kerkhoff (Provost) and Kim Kvaal (CFO)
Project Leads: Gareth Barkin (Dean of Operations & Technology), Francisco Chavez (CIO)
Faculty Members: America Chambers, Siddharth Ramakrishnan, Ariela Tubert
Administrative Representatives: Janet Hallman, Stacy Kelly, Kevin Riordan
Technology Services: Faithlina Abeshima, Aaron Tran
Diversity, Equity, and Inclusion: Lorna Hernandez Jarvis (VP OIED)