AI Ethical Guidelines

A working group of the Task Force on AI in Academic Affairs was charged with developing ethical guidelines and best practices regarding the creation and use of artificial intelligence within NIU academic affairs. To accomplish this goal, members gathered and reviewed reports and guidelines on ethical issues and questions pertaining to AI developed by higher education institutions and by other expert sources (e.g., UNESCO, the U.S. government). The working group then identified the ethical issues most relevant to NIU and its stakeholders. The following ethical guidelines for AI use were developed around NIU’s mission, vision, and core values.

Curiosity and Creativity

Responsible Innovation

Foster a culture of accountable teaching, learning, and AI use that prioritizes all people and their rights and that addresses potential risks while maximizing benefits.

AI Literacy

Foster the development of AI training and curricula that enable faculty, students, and staff to make informed decisions regarding the use and effects of AI.

Opportunity

Ensure that all faculty, students, and staff--regardless of background, ability, discipline, or status--have equitable access to AI tools, resources, and training.

Equity and Inclusion

Multidisciplinary

Promote diverse participation in the implementation of AI systems to ensure that a wide range of perspectives and needs are considered.

Bias Awareness

Raise awareness among the university community about the potential for bias in AI and provide training on how to identify and address it.

Community Responsibility

Prioritize the well-being and autonomy of the individuals whose data is used in AI applications.

Equitability

Ensure that the deployment, access to, and use of AI does not disadvantage or harm any individual or group.

Ethics and Integrity

Accountability

Establish clear lines of responsibility for the development and use of AI. Individuals, groups, or administrative units must be accountable for AI's outcomes and impacts.

Human Oversight

Design and use AI systems with appropriate human oversight. Critical decisions that have lasting impact on students, faculty, staff, or the wider university community (e.g., admissions, grading, qualifications) must not rely solely on AI output without review and judgment.

Ethical Training

The university has a duty to develop or make available training on AI ethics, responsible innovation, and the potential societal impacts of AI. All members of the university community involved in the development or use of AI have the personal responsibility to complete AI training.

Transparency

Prioritize transparency and determine appropriate disclosure of AI use by assessing multiple factors: the use case for an AI technology, the sensitivity of the data entered into or created by AI-supported applications, and the effect on stakeholders.

Service and Stewardship

Privacy

Consider the sensitivity of data when selecting and using AI applications. Exercise extreme caution with vulnerable populations and any personally identifiable information. Care and attention should also be paid to how a particular AI application stores, uses, and learns from that data.

Protection and Security

Review data protection measures as early as possible when selecting and/or implementing AI applications to ensure appropriate safeguards, consulting subject matter experts where needed.

Environmental Impact

Consider the environmental impact of AI technologies, including energy consumption and resource utilization. Promote the development and use of sustainable AI practices.


Creative Commons License

This framework is shared under a Creative Commons Attribution-NonCommercial 4.0 International License.
