Determined by the Executive Vice-President for Education on March 11, 2026
1. Overview
Since its founding, under its tradition of academic freedom, Kyoto University has pursued a mission to tackle multifaceted challenges and contribute to harmonious coexistence within the global community.
Its mission with regard to education is to preserve outstanding knowledge and cultivate a creative spirit. In accordance with that mission, the university has established the AI Initiative for Education and Learning at Kyoto University to examine the use of AI in education from ethical, legal, and societal perspectives, and maximize its value while minimizing risks to learning outcomes.
The initiative aims to establish an environment in which faculty, staff, and students can responsibly utilize generative AI in education and learning, thereby enhancing their ability to contribute to addressing various challenges. The emergence of generative AI represents an academic challenge that prompts a reexamination of the very nature of human intelligence. While recognizing its potential to promote a deepening of knowledge, the university will thoroughly explore both its merits and demerits through multifaceted discussion among its members to develop a new vision for education suited to an era of human-AI coexistence.
2. Core Principles
◆ Respect for Human Dignity: When using generative AI in education and learning, the university shall give top priority to respecting human dignity and avoiding the homogenization of knowledge. The university shall strive to protect privacy and ensure safety.
◆ Achieving Effective Education: The university will pursue education and learning that enables students to gain superior knowledge and acquire logical and critical thinking skills, as well as the ability to identify problems and solve them. The university will provide opportunities to learn effective methods for utilizing generative AI in learning and research, as well as strategies to avoid potential risks that it may pose (such as output of misinformation [hallucination], bias, information leaks, infringement of intellectual property rights or portrait rights, and deviation from learning objectives).
◆ Transparency and Accountability: The university will implement education that cultivates the ability to critically evaluate the basis and limitations of generative AI output, rather than accepting it unquestioningly, and make the process of using generative AI transparent. Humans must always bear the ultimate responsibility for their own outcomes.
◆ Fairness and Inclusiveness: The university will maintain a fundamental environment in which students with diverse attributes and backgrounds, including gender, region of origin, and field of study, can benefit from generative AI.
◆ Respect for Diversity and Consensus Building through Dialogue: To avoid the homogenization of knowledge that comes from using generative AI, the university will also respect the “freedom not to use AI.” Furthermore, by promoting dialogue among students, faculty, and staff, the university will seek to build more appropriate consensus when necessary.
3. Objectives
◆ Identify challenges and formulate countermeasures based on an understanding of actual usage within the university.
◆ Establish guidelines for the use of generative AI in education and learning, revising the guidelines as necessary.
◆ Develop systematic educational content, ranging from AI literacy education to “AI for Science” (the use of AI in research).
◆ Provide support to faculty and staff regarding the use of generative AI in education.
◆ Develop and provide e-learning content and AI learning support tools for the use of generative AI to support autonomous learning by students.
◆ Establish a sound AI learning environment that takes data protection, fairness, and intellectual property rights into consideration.
◆ Share trial-and-error experiences in generative AI utilization and play a leading role in proposing the optimal form of education in the AI era.
4. Priority Areas
(A) Enhancement of Education for Students
◆ Development of guidelines for using generative AI in education and learning
◆ Establishment and provision of AI literacy courses
◆ Establishment and provision of “AI for Science” courses
◆ Development and provision of content and tools to support autonomous learning
(B) Enhancement of Support for Faculty
◆ Development of guidelines for using generative AI in education and learning
◆ Provision of faculty development (FD) initiatives relating to the use of generative AI in education
◆ Provision of information on the use of generative AI in education (website launch)
◆ Provision of staff development (SD) programs on support for teaching duties for teaching assistants (TAs), teaching associates (TASs), administrative staff, etc.
(C) Development of the Information Infrastructure
◆ Provision of an environment that enables the use of generative AI through university accounts
◆ Ensure connectivity with KULASIS and the Learning Management System (LMS)
◆ Strengthen personal information protection and data governance
5. Framework for Implementation
To maximize the value and minimize the risks entailed in utilizing generative AI in education and learning, the university will establish a forum for interdepartmental dialogue. With the Office for Educational Innovation playing the central role, the Center for Next-Generation Informatics and AI Education and Research, Institute for Liberal Arts and Sciences, Division of Graduate Studies, Academic Center for Computing and Media Studies, and Institute for Information Management and Communication will collaborate to plan and implement strategies for the utilization of generative AI in education and learning. For education related to “AI for Science,” the Office of Research Acceleration will participate in the collaboration as appropriate.
6. Action Plan
2025
● Conduct surveys (of faculty and students) to grasp the current situation (conduct the surveys annually to monitor changes over time)
● Establish guidelines for the use of generative AI in education and learning
● Launch a website about AI initiatives in education and learning
2026
● Begin providing a Google tenant for students
● Publish guidelines for the use of generative AI in education and learning (to be reviewed and revised as appropriate)
● Launch AI literacy courses and “AI for Science” courses
● Begin providing faculty development (FD) programs
2027
● Develop content and tools to support autonomous learning
● Launch staff development (SD) programs on support for teaching duties for teaching assistants (TAs), teaching associates (TASs), administrative staff, etc.
2028
● Continue to implement measures for ongoing improvement
7. In Conclusion
Acquiring the ability to use generative AI is not simply acquiring the skill to use a tool; it means gaining the ability to examine the nature of knowledge, including generative AI, and tackle multifaceted challenges in a way that stays rooted in a deep understanding of humanity.
In the AI era, Kyoto University will develop an education environment that enables the inheritance of outstanding knowledge and the cultivation of a creative spirit through the collaborative efforts of students, faculty, and staff.
This initiative will continue to be revised as necessary.
Determined by the Executive Vice-President for Education on March 11, 2026
1. Introduction
Based on its tradition of academic freedom, Kyoto University has consistently emphasized education rooted in dialogue and independent learning. While the use of generative AI in education has the potential to enhance its effectiveness, it also carries numerous risks.
The use of generative AI, like driving a car, requires more than simply learning how to operate it; it also requires recognizing the risks underlying its convenience and a deep understanding of how to ensure safety. A particular characteristic of generative AI is that risks such as misinformation and the infringement of rights are difficult to discern. Therefore, when using generative AI, it is necessary to constantly envision such potential risks and act responsibly.
These guidelines outline key points regarding the use of generative AI in education and learning that should be mutually understood by faculty, staff, and students at this current point in time.
2. The Anticipated Benefits of Generative AI
◆ Improved Efficiency: By having generative AI assist with routine tasks, individuals can focus on more essential activities.
◆ Improved Learning Efficacy: Positioning generative AI not as a “tool that provides answers” but as an aid to organizing thoughts and generating ideas can broaden perspectives and ideas and hone logical and critical thinking skills.
◆ Meeting Diverse Needs: By using generative AI to provide support tailored to individual student needs, education and learning can be enriched to meet diverse requirements.
3. Anticipated Risks
◆ Output of Misinformation (Hallucination): Generative AI simply strings words together probabilistically and may output misinformation that is not based on fact. Blindly accepting such output severely compromises academic accuracy.
◆ Bias: Prejudices (racial, gender, etc.) contained in the generative AI’s training data may be reflected in its output.
◆ Information Leakage: If the input prompts are used as training data for the AI, there is a risk that unpublished research content, personal information, confidential information, etc., may be leaked.
◆ Infringement of Intellectual Property Rights/Portrait Rights, Plagiarism/Academic Misconduct: Using the output of generative AI may infringe intellectual property rights (copyright, etc.) or portrait rights. Presenting the output of generative AI as one’s own work may constitute academic misconduct.
◆ Deviation from Learning Objectives: Overreliance on generative AI may prevent students from acquiring the essential liberal arts education and specialized knowledge/skills expected to be gained at university, including foreign language translation/composition and programming.
◆ Diminished Critical Thinking and Creativity: Overreliance on generative AI risks not only failing to cultivate logical and critical thinking skills, but may actually diminish the ability to compose creative sentences and communicate. Relying on AI that generates output based on existing data could hinder the development of original ideas, a questioning attitude toward conventional knowledge, and the ability to analyze materials in detail.
◆ Psychological Dependence: Because generative AI fundamentally avoids outputting negative wording, interacting with it for prolonged periods of time on a daily basis could lead to problems such as a decline in psychological resilience towards the conflicts that arise in interpersonal communication and social activities, and even psychological dependence on generative AI.
4. Necessary Countermeasures
(1) Countermeasures by Students
◆ Conscious Use: Recognize the benefits and risks of generative AI. Use it consciously to maximize its effectiveness in line with your learning objectives (and ultimately your purpose for studying at university), and exercise ethical consideration.
◆ Input and Output Precautions: Avoid carelessly entering privacy-related information, confidential information, or copyrighted material as prompts or uploaded data. When using output results, verify that they do not infringe copyrights, portrait rights, etc. (i.e., whether they resemble other people’s copyrighted material or portraits).
*Please refer to the attached document for a summary of input and output precautions.
◆ Critical Evaluation and Personal Refinement/Your Own Thoughts and Words: Always question AI-generated output and conduct rigorous verification based on primary sources (e.g., fact-checking). Consciously consider potential biases. Rather than uncritically accepting the output of generative AI, explore alternative perspectives and use the AI output in your own thoughts and words.
◆ Disclosure of the Usage Process: When using AI to create material beyond the scope of minor editorial revisions, maintain records of the work process. If requested, clearly state which AI tools were used, at which stages, and in what way. Using AI outputs in your own work may constitute plagiarism (copying and pasting) of other sources and could be deemed academic misconduct.
◆ Compliance with Usage Policies: When doing classwork (including writing reports) or writing papers, follow the generative AI usage policies specified by your instructor or academic advisor.
◆ Ensure Opportunities for Human Interaction: If your daily generative AI usage regularly exceeds three hours, if you find yourself consulting generative AI more often than people, or if interacting with other people feels burdensome, consciously limit your generative AI use and create opportunities for human interaction to prevent dependency on generative AI.
(2) Countermeasures by Faculty Members
◆ Clearly Explain the Usage Policy for Generative AI: Clearly state the usage policies for generative AI to students in alignment with the goals of the course or research guidance (e.g., encourage active use, permit limited use, or do not permit use). In particular, for courses in which generative AI use is mandatory to cultivate generative AI utilization skills, or in which generative AI use is not permitted in order to cultivate the specialized knowledge/skills that are prerequisites for using generative AI, the policy should be clearly stated in advance in the syllabus, etc.
◆ Refining Assignments: Avoid setting tasks that can easily be solved by generative AI and promote more substantive learning by combining diverse types of assignment (e.g., combine report assignments with written exams, oral explanations, or demonstrations).
◆ Encourage Critical Thinking and Creativity: Even when recommending the use of generative AI, strive to cultivate critical thinking and creativity by having students compare AI outputs with their own ideas or engage in discussions with other students.
◆ Ensure Learning Opportunities: Even if the use of generative AI is recommended, alternatives should be provided to ensure that students who do not wish to use it are not disadvantaged (except in courses specifically aimed at developing generative AI skills).
◆ Ensure Fairness in Grading: In courses focused on acquiring fundamental knowledge or specialized skills, verify understanding through written or oral exams, etc., to ensure that students who use generative AI too readily do not gain an advantage.
5. Summary
Kyoto University aims to develop new approaches to education and learning through collaboration among faculty, staff, and students to meet the demands of an era in which humans and AI coexist. To realize the core principle of passing on outstanding knowledge and cultivating a creative spirit, it is vital to pursue education and learning that avoids the risks associated with the use of generative AI while maximizing its benefits.
Please note that these guidelines will be revised as necessary.
For Reference
Institute for Information Management and Communication, Kyoto University: “Information Security Precautions for Generative AI Services and Generative AI Services on Campus” (in Japanese only)
https://www.iimc.kyoto-u.ac.jp/ja/info/20251212141618
University of Osaka: “Generative AI Teaching Guides” (in Japanese only)
https://www.tlsc.osaka-u.ac.jp/project/generative_ai/
University of California, Los Angeles (UCLA): “Creating a Generative AI Policy for Your Course”
https://teaching.ucla.edu/news/creating-a-generative-ai-policy-for-your-course/
AI Assessment Scale (AIAS)
https://aiassessmentscale.com/
----------
Attachment:
The Kyoto University Guidelines for the Use of Generative AI in Education and Learning: Requirements for Students (Input and Output Precautions)
When using generative AI, it is important to be careful not to input privacy-related information, confidential information, or copyrighted works as prompts or uploaded data. Additionally, be aware that the output may potentially infringe upon others’ intellectual property rights (such as copyright) or portrait rights.
Particular attention should also be paid to whether the input data will be used to train the AI model. Generative AI services that use input for training may include your input in outputs generated for other users, creating a risk of information leakage or disclosure.
To avoid such issues, it is important to use services that explicitly state they do not use input information for training, or services that allow users to choose not to have their input data used for training (referred to as “opting out” of training).
Note:
In the case of the generative AI tools (such as Gemini or NotebookLM) used within the Google Workspace for students, input data and uploaded materials will not be reused for training AI models, in accordance with the provisions of the “Generative AI in Google Workspace Privacy Hub.” This means input information will not be used for Large Language Model (LLM) training (i.e., training opt-out is guaranteed).
Please note, however, that “not being used for training” does not equate to “zero risk of information leakage,” and exercise due caution.
Below, information that may potentially be input into generative AI is categorized into three types: A: information that should never be input at all; B: information requiring careful handling for input/output; and C: information that is acceptable to input. Examples are provided for reference.
Category A: Information That Should Never Be Input into Generative AI
Among privacy-related information, highly identifiable personal details, such as names, Individual Numbers (My Number), passport numbers, residence card numbers, and facial photographs should never be input into generative AI services. This is because the damage caused by the leakage or disclosure of such information is extremely severe, and in many cases, the consequences are irreversible if such an incident occurs. The same applies to financial information such as bank account numbers, credit card numbers, and contact information such as addresses, phone numbers, and private email addresses.
In addition, if you conduct research as a member of a laboratory, you may handle other people’s privacy-related information as part of your research data. Such information may constitute confidential information. Please refer to Category B, item (b), below.
Note:
Even when opt-out settings (that prevent input data from being used to train AI models) are enabled, information that falls under Category A should generally never be entered. Immediate leakage or misuse of such information could cause severe harm, and the risk is not substantially reduced by whether or not it is used for training. Furthermore, when transcribing interview videos for research purposes using AI with opt-out settings, you must obtain the subject's permission in advance.
Category B: Information Requiring Careful Handling for Input/Output
(a) Privacy-Related Information
Information such as age, gender, nationality, faculty affiliation, ideology/beliefs, and illnesses/mental health concerns may be input provided it is not linked to highly identifiable personal information, as described in Category A. However, as combining privacy-related information can increase the likelihood of identifying an individual, you must always exercise caution regarding the anonymity of input data.
(b) Confidential Information
While students may not often handle confidential university information, instances of handling sensitive information may arise if they are employed as a teaching assistant (TA) or research assistant (RA), or if affiliated with a research lab and conducting research. For example, unpublished research data falls into this category.
While inputting such information into external services may be restricted, input may be permitted for services confined to the campus environment or services institutionally contracted by the university. Students should therefore be sure to confirm the handling procedures for such information with their supervisor or academic advisor.
An additional important point to note about the behavior of generative AI services is that the information input may be used to train AI models shared across the entire service. In such cases, the information input may be indirectly included in the outputs generated by other users, potentially resulting in information leakage or disclosure. As the leakage of confidential information is unacceptable, even if you are permitted to use external services, you should choose a service that allows you to opt out of data training.
(c) Copyrighted Material and Portraits
Copyrighted material may be input if the purpose of use falls within the scope of personal use or within a course of instruction at an educational institution. For example, if you want to use generative AI to summarize or translate an appropriately obtained paper, you can do so provided the output results are not shared with others.
However, if the service you use does not have an opt-out setting, the prerequisite of “not sharing output results with others” may be compromised. Therefore, be sure to check the opt-out setting before using the service.
If you wish to share output results with others, be aware that the generative AI may output the input copyrighted material exactly as it was or produce similar information. Furthermore, due to the nature of generative AI, even if the input information does not contain copyrighted material, the AI might generate output similar to existing copyrighted material based on its internal information. If the output contains information similar to copyrighted material, that portion may be subject to copyright. Therefore, when generating output, investigate whether it is similar to other people’s copyrighted work, and do not share the results if there is a risk of infringement. For a deeper understanding of the relationship between generative AI and copyright, please refer to the materials from the Agency for Cultural Affairs’ copyright seminar “AI and Copyright.”
Caution is also required regarding portraits in outputs. If an output image is recognizable as the portrait of a specific person, that portion may be subject to portrait rights. Therefore, when generating output, investigate whether they resemble another person’s portrait, and if there is a risk of infringement, do not use the results. For a deeper understanding of the relationship between generative AI and portrait rights, please refer to the Ministry of Economy, Trade and Industry’s “Guidebook for Utilizing Generative AI in Content Creation.”
When inputting information such as that described in items (a), (b), and (c) into generative AI services, the reliability of opt-out settings and the certainty that confidentiality will be protected are likely to vary significantly depending on the specific service. In general, services confined to the campus environment are considered the most reliable, followed by external services contracted by the university as a whole, and then individually contracted or free services. It is important, therefore, to use an appropriate service based on the type and sensitivity of the information being handled. For a deeper understanding of this point, please refer to “Information Security Precautions for Generative AI Services and Generative AI Services on Campus,” produced by Kyoto University’s Institute for Information Management and Communication.
Note:
If opt-out settings are enabled, input of Category B information may be permitted in certain circumstances, as described in items (b) and (c), above. Inputting unpublished research data itself should generally be avoided. However, information that has been sufficiently de-identified and abstracted, such as background explanations about the research, general discussions about methodology, or organizational and logical frameworks, may be permitted for use, subject to approval by your academic advisor.
Caution must continue to be exercised regarding the nature and granularity of information, particularly the handling of output. Always consult with the course instructor or your academic advisor before using such output.
If the copyrighted material entered is not used to train the AI model, the risk of that material being reused in others’ output is reduced. This makes it safer to use generative AI for summarization, translation, and writing assistance for educational and learning purposes. However, as described in item (c), above, if the output results are shared with a third party, the user is still responsible for verifying whether there is any copyright infringement.
Category C: Information That Is Acceptable to Input
It is acceptable to input information for general academic discussion, verification of hypothetical scenarios, advice on the structure of reports, general programming questions, or language instruction and example sentence generation into generative AI, provided it does not fall under Categories A or B.
March 11, 2026
To all students:
Hiroshi Kokubu, Executive Vice-President for Education
Acquiring the ability to use generative AI is more than simply acquiring the skill to use a new tool; it means gaining the ability to examine the very nature of knowledge, including generative AI, and tackle complex, multifaceted challenges while addressing fundamental questions, such as what it means to be human, what language is, and what knowledge is. While cultivating that ability, we must constantly consider the possibilities and risks associated with its use, and continually consider how much we should entrust to AI and how much we should personally take responsibility for.
Generative AI is fundamentally transforming not only the way knowledge exists but also the very speed at which we think and express ourselves, and the way in which we connect ideas. By enabling access to previously unseen materials and discussions across disciplinary and linguistic barriers, it allows us to connect knowledge from different fields in new ways and formulate new questions. By making concepts and methodologies outside one’s own field of specialty more accessible, it has the potential to accelerate individual learning and significantly increase the entry points for research. By utilizing generative AI appropriately, the scope of inquiry, which was previously constrained by limited time and learning, can expand in all directions, opening up academic endeavors to become more interdisciplinary and creative.
However, we must also carefully consider the limitations and risks that this technology brings with it, despite its vast potential. The reality is that it is far from easy to remain independent of technology. Even when we believe we are using it carefully, it can unconsciously limit the breadth of our thinking and the framework of our judgment. It has also been observed that the efficiency and convenience brought by generative AI are inextricably linked to excessive dependence. As this technology becomes more commoditized in the future and more people come to rely on generative AI, imagine a situation where humans lack specialized knowledge, advanced linguistic skills, thinking ability, imagination, and memory. Imagine a situation where you are required to think on your feet in an environment in which the technology is unavailable. Imagine yourself having no choice but to uncritically accept the words, images, or videos that appear before you.
In the future, a time may come when you find yourself telling the younger generation about the intellectual training you received at university. What will you tell the younger AI-native generation about your time of intellectual cultivation? As we experience an increase in the homogenization of language and images, what kind of intellectual training will you choose from now on? As the meaning of “through your own effort” evolves, what will you include in the scope of “your own”? Of course, there are no pre-determined answers to such questions. This is Kyoto University, so you should provide your own answers.
The important thing is to be aware of the freedoms and limitations that technology creates, and to continually question how we think, how we choose our words, and how we make decisions within that environment. Freedom is born from the act of continuing to think despite accepting limitations. In any era, only those who are aware of limitations can create freedom. It is precisely the experience of overcoming those limitations and creating freedom that will eventually become the “history of knowledge” that you yourself can share with the next generation. In conclusion, I hope that you, who enrolled at Kyoto University through a great deal of effort, will consider the question, “Why on Earth did I come to university?” and engage with this new technology—which has the potential to be both creative and disruptive—with the aspiration and resolve to create new knowledge in this new era.