
Wikipedia:Artificial intelligence

From Wikipedia, the free encyclopedia
(Redirected from Wikipedia:AI)

Artificial intelligence (AI) is used on a number of Wikipedia and Wikimedia projects, whether directly in the creation of text content or in support roles such as evaluating article quality, adding metadata, or generating images. As with any machine-generated content, care must be taken when employing AI at scale or in areas where community consensus calls for extra caution.

When exploring AI techniques and systems, the community consensus is to prefer human decisions over machine-generated outcomes until the implications are better understood.

What is Wikipedia's AI policy?


Wikipedia:Writing articles with large language models (WP:LLM) comprises one main point:

The use of LLMs to generate or rewrite article content is prohibited

Exceptions are made for basic copyediting and translation. The latter is covered by a dedicated guideline, Wikipedia:LLM-assisted translation (WP:LLMT), which requires editors to carefully review the output before publishing it in articles.

Other policies and guidelines

Other policies and guidelines contain certain provisions that are specifically about AI-generated content. As of March 2026, they are as follows:

Additional resources

The following are not policies or guidelines, but still have some significance in this context:

Discussion timeline



Applications


AI-related efforts on Wikipedia include but are not limited to:

Revision scoring


The Objective Revision Evaluation Service (ORES), started in 2015 as a Wikimedia Foundation project, scores revisions using machine learning models trained to predict article quality or detect vandalism. These scores are used by tools such as ClueBot NG to revert vandalism immediately, and by evaluation tools like the Program and Events Dashboard to measure the outcomes of classwork, edit-a-thons, and organized editing campaigns.
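As a rough illustration of how a tool might consume ORES output, the sketch below builds a score-request URL and extracts a probability from a response of the shape the service returned. It is a minimal sketch only: ORES has since been superseded by the Lift Wing platform, the `damaging` model name and the sample values are illustrative assumptions, and no network request is actually made here.

```python
from urllib.parse import urlencode

# Base of the historical ORES v3 scoring API (illustrative; the service
# has been superseded by Lift Wing).
ORES_BASE = "https://ores.wikimedia.org/v3/scores"

def build_score_url(wiki: str, rev_id: int, model: str = "damaging") -> str:
    """Construct a score-request URL for one revision and one model."""
    return f"{ORES_BASE}/{wiki}/?{urlencode({'models': model, 'revids': rev_id})}"

def extract_probability(response: dict, wiki: str, rev_id: int, model: str) -> float:
    """Pull the 'true' probability out of an ORES-style JSON response."""
    score = response[wiki]["scores"][str(rev_id)][model]["score"]
    return score["probability"]["true"]

# A response in the general shape ORES returned (values here are made up):
sample = {
    "enwiki": {
        "scores": {
            "123456": {
                "damaging": {
                    "score": {
                        "prediction": False,
                        "probability": {"false": 0.97, "true": 0.03},
                    }
                }
            }
        }
    }
}

print(build_score_url("enwiki", 123456))
print(f"P(damaging) = {extract_probability(sample, 'enwiki', 123456, 'damaging'):.2f}")
```

A patroller-facing tool would compare such a probability against a threshold before flagging or reverting an edit; ClueBot NG applies its own classifier and thresholds on top of signals like these.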

Text translation


Guidance can be found at Wikipedia:LLM-assisted translation. Editors are required to be skilled enough in both the target language and English to verify the translation, and to check the output for AI hallucinations, core content policy violations, and text-source integrity. LLM-assisted translations must also comply with other translation requirements.

The Content Translation tool, used across Wikimedia projects, can incorporate machine translation output (from services such as Google Translate) when adapting an article from one Wikipedia to another. On the English Wikipedia, however, the tool currently states that "machine translation is disabled for all users and this tool is limited to extended confirmed editors." As a result, the tool supports only manual translation on the English Wikipedia, though some users have translated into Simple English as a workaround. Relatedly, a section of the Help:Translation page offers the broad advice: "avoid machine translations."

Article text generation


The explosion of interest in ChatGPT in 2022 led to increased curiosity about using generative AI to help compose Wikipedia articles. However, current consensus is that "the use of LLMs to generate or rewrite article content is prohibited." Machine-generated text from tools such as ChatGPT is generally accepted to be in the public domain, so copyright is not a legal obstacle to using it. Such text is generally governed by Help:Adding open license text to Wikipedia#Converting and adding open license text to Wikipedia, which advises ensuring that content is adjusted for Wikipedia's style and supported by reliable sources.

Images and Commons


Image metadata – GLAM institutions have worked to supplement image keyword data with machine learning. These efforts include:

  • Computer-aided tagging – Started in 2019, "The computer-aided tagging tool is a feature in development by the Structured Data on Commons team to assist community members in identifying and labeling depicts statements for Commons files." See: c:Commons:Structured data/Computer-aided tagging
  • Metropolitan Museum of Art tagging – This project used Met Museum tagging data to train a machine learning system to predict new "depicts" recommendations for Wikidata, resulting in a new Wikidata Game that helped add more than 4,000 depicts (P180) statements to Wikidata. See the Met Museum blog post by Andrew Lih: "Combining AI and Human Judgment to Build Knowledge about Art on a Global Scale," March 4, 2019.[1]
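To make the P180 data model concrete, the sketch below builds a request URL for the real Wikidata API module `wbgetclaims` and extracts depicted Q-ids from a response of the shape that API returns. It is a minimal sketch under stated assumptions: the entity ID and the sample Q-ids are made up for illustration, and no network request is made.

```python
from urllib.parse import urlencode

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def depicts_url(entity_id: str) -> str:
    """URL asking the Wikidata API for an entity's P180 (depicts) claims."""
    params = {"action": "wbgetclaims", "entity": entity_id,
              "property": "P180", "format": "json"}
    return f"{WIKIDATA_API}?{urlencode(params)}"

def depicted_items(claims_json: dict) -> list[str]:
    """Extract the Q-ids of depicted entities from a wbgetclaims-style response."""
    out = []
    for claim in claims_json.get("claims", {}).get("P180", []):
        out.append(claim["mainsnak"]["datavalue"]["value"]["id"])
    return out

# A response in the general shape wbgetclaims returns (Q-ids are illustrative):
sample = {
    "claims": {
        "P180": [
            {"mainsnak": {"datavalue": {"value": {"entity-type": "item", "id": "Q146"}}}},
            {"mainsnak": {"datavalue": {"value": {"entity-type": "item", "id": "Q3010"}}}},
        ]
    }
}

print(depicts_url("Q12345"))
print(depicted_items(sample))
```

The Met project worked in the opposite direction, using model predictions to propose new P180 statements for human review in the Wikidata Game rather than reading existing ones.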

Image generation

See also


General

  • Lih, Andrew (March 4, 2019). "Combining AI and Human Judgment to Build Knowledge about Art on a Global Scale". Metropolitan Museum of Art.
  • Davis, LiAnna (January 29, 2026). "Generative AI and Wikipedia editing: What we learned in 2025". Wiki Education. Retrieved February 26, 2026. Includes a list of what it is good for and what it is not.

Wikimedia Foundation


Demonstrations of generative AI using LLMs


2025


2024


2023


2022

  • User:JPxG/LLM demonstration (wikitext markup, table rotation, reference analysis, article improvement suggestions, plot summarization, reference- and infobox-based expansion, proseline repair, uncited text tagging, table formatting and color schemes)

Misc


References

  1. ^ "Copyright and Artificial Intelligence". United States Copyright Office. Retrieved April 9, 2025.