Commons:Village pump/Proposals
This page is used for proposals relating to the operations, technical issues, and policies of Wikimedia Commons; it is distinguished from the main Village pump, which handles community-wide discussion of all kinds. The page may also be used to advertise significant discussions taking place elsewhere, such as on the talk page of a Commons policy. Recent sections with no replies for 30 days and sections tagged with {{Section resolved|1=--~~~~}} may be archived; for old discussions, see the archives; the latest archive is Commons:Village pump/Proposals/Archive/2026/01.
- One of Wikimedia Commons’ basic principles is: "Only free content is allowed." Please do not ask why unfree material is not allowed on Wikimedia Commons or suggest that allowing it would be a good thing.
- Have you read the FAQ?
SpBot archives all sections tagged with {{Section resolved|1=~~~~}} after 5 days and sections whose most recent comment is older than 30 days.
Ratify Commons:AI images of identifiable people as a guideline
In a previous discussion, consensus was found to implement a policy related to AI-generated or AI-edited images of real people, leading to the proposed guideline at Commons:AI images of identifiable people. Whether that draft should be designated a guideline is the subject of the discussion below. The proposal began on 7 December 2025, and has been open for more than two months. There have only been two new votes in February thus far, so it seems ready for closure.
Among the objections were a couple of arguments that we do not need a stand-alone guideline, but that was addressed by the earlier discussion. Several suggestions and objections were raised about the draft text, often as part of an oppose !vote, but sometimes as part of a support: for example, that publications on behalf of someone should be permitted, clarification on "legal and moral" rights, and whether people who have been dead for a long time should be excluded. None of these saw sufficient engagement to modify an otherwise clear consensus to adopt the guideline as written.
Importantly, the proposed (and now adopted) version is not set in stone, but rather the guideline's starting point. As with any other guideline, issues with specific aspects of the text can be addressed on the talk page through normal consensus-building procedures. — Rhododendrites (talk) | 17:06, 15 February 2026 (UTC)
- The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Following the discussion at Commons:Village_pump/Proposals/Archive/2025/09#Ban_AI_generated_or_edited_images_of_real_people, I prepared Commons:AI images of identifiable people.
I am now seeking to have it officially adopted as a guideline.
@GPSLeo, Josve05a, JayCubby, Dronebogus, Jmabel, Grand-Duc, Pi.1415926535, Túrelio, Raymond, Isderion, Smial, Adamant1, Infrogmation, Omphalographer, Bedivere, Masry1973, and Ooligan: I believe this is everyone that participated in the original discussion. Please feel free to ping anyone if I missed them.
Cheers, The Squirrel Conspiracy (talk) 22:28, 7 December 2025 (UTC)
Support As proposer. The Squirrel Conspiracy (talk) 22:28, 7 December 2025 (UTC)
Support As a 'canvass'-by-ping user. JayCubby (talk) 22:38, 7 December 2025 (UTC)
Support Pi.1415926535 (talk) 22:47, 7 December 2025 (UTC)
Support. Omphalographer (talk) 23:15, 7 December 2025 (UTC)
Support Abzeronow (talk) 23:29, 7 December 2025 (UTC)
Support Ooligan (talk) 00:01, 8 December 2025 (UTC)
Support Grand-Duc (talk) 00:54, 8 December 2025 (UTC)
Support --Bedivere (talk) 01:02, 8 December 2025 (UTC)
Support with one caveat "The image in question was published by the person it depicts" should be "The image in question was published by the person it depicts or with their documented permission or approval." - Jmabel ! talk 02:53, 8 December 2025 (UTC)
- @The Squirrel Conspiracy: would you have any objection to that small edit? - Jmabel ! talk 05:13, 8 December 2025 (UTC)
- In principle, I think it's fine. In practice, I'm not sure what exactly that looks like though. I'm loathe to have people submit "documented permission" to VRT, because a) they're often backlogged as it is, and b) there's this loop where someone uploads a file, then it gets deleted for permissions reasons, then it goes through VRT and is restored, then it gets deleted for scope reasons (because while VRT agents can decline tickets for scope reasons, it seems like a decent number of agents are uncomfortable doing so) - it's a tremendous waste of volunteer resources and I can see a lot of AI images getting stuck in that loop. @Krd, thoughts? The Squirrel Conspiracy (talk) 05:53, 8 December 2025 (UTC)
- I think the sort of situation Jmabel is trying to address is content published on behalf of a person by a social media manager or similar. For instance, if a political figure were to post an AI-generated image on their social media, we wouldn't necessarily know whether it was personally posted by the politician or by their PR team, but it should probably be considered allowed regardless. Omphalographer (talk) 06:01, 8 December 2025 (UTC)
- @Jmabel: could the word "auspice" or a wording like "published by or on behalf of the person it depicts" work (with a footnote explaining that "on behalf of" shall mean by a person like a social media manager)? Regards, Grand-Duc (talk) 06:41, 8 December 2025 (UTC)
- That's one of two cases I had in mind. The other is after-the-fact endorsement. E.g. (this has happened) someone publishes an AI-generated image of Trump, Trump re-tweets it (or whatever you call the equivalent on Truth Social). Also (likely, but no examples offhand), someone approvingly links in social media or on their own web page, etc. to an AI-generated image of themself.
- FWIW I wasn't thinking VRT at all. I'd hope that seldom, if ever, arises. - Jmabel ! talk 22:26, 8 December 2025 (UTC)
- Also a good point. There's a lot of different ways that content can be posted on social media these days - posting, reposting, embedding offsite media, etc. IMO, we should treat all of these cases identically for the purposes of this guideline. Omphalographer (talk) 00:27, 9 December 2025 (UTC)
Neutral While I am against this as a policy/guideline, the community has spoken. So, nothing against ratifying it, but I don't want to support it. --Jonatan Svensson Glad (talk) 03:51, 8 December 2025 (UTC)
- Well, the wording suggests it would be a policy anyway, disallowing some AI materials de facto (or de jure, depending on how you interpret it). Bedivere (talk) 04:48, 8 December 2025 (UTC)
- The community has previously spoken on another proposal, not this proposal. Now, the community is hopefully speaking about this new proposal which is different from the earlier one. Prototyperspective (talk) 17:12, 8 December 2025 (UTC)
- it's somewhat insulting to imply that participants are confused or unaware; they've simply reached conclusions different from yours. Bedivere (talk) 22:08, 8 December 2025 (UTC)
- Good that I didn't imply that then. Prototyperspective (talk) 10:19, 9 December 2025 (UTC)
Support Raymond (talk) 07:44, 8 December 2025 (UTC)
Support. --Túrelio (talk) 07:58, 8 December 2025 (UTC)
Support, looks good. --Belbury (talk) 08:45, 8 December 2025 (UTC)
Support GPSLeo (talk) 09:29, 8 December 2025 (UTC)
Support --Smial (talk) 11:41, 8 December 2025 (UTC)
Strong oppose The original proposal had "AI generated photos where the description states that the photo shows an actual person are not allowed", but this new proposal now has the much more restrictive "Images of identifiable people created by AI are not allowed on Commons unless at least one of the following criteria are met [posted by the person or reliable sources cover it]". I don't know if the voters here all know about this. I think it should be changed. There are two main issues:
- Example File:King Tutankhamun brought to life using AI.gif (display was disabled)
- Information graphics and art, such as caricatures relating to public officials – e.g. an information graphic or artwork pointing out problems with Trump's behavior, claims, and policies.
- It doesn't seem to exclude identifiable historic people. AI images can often make sense, especially when there is nearly no or no free media available of the person. An example is on the right.
- I think the votes were cast hastily without proper deliberation and without consideration of potential uses. A policy this indiscriminate and restrictive additionally seems to violate the existing policies COM:SCOPE, COM:INUSE and COM:NOTCENSORED. A constructive approach would be to edit the proposed policy, but I would probably still tend toward oppose because I see no need for this – we should strive to stay as unbiased and uncensored as possible and delete files based on whether that is due per set/case. People could introduce more and more restrictions, and soon you'll find yourself in a situation where you can't even upload an image critical of Trump anymore per policy (and with wider adoption of AI tools by society, this is what this policy will already largely achieve).
- Prototyperspective (talk) 15:29, 8 December 2025 (UTC)
- It's bold of you to assume that everyone above you voted "hastily without proper deliberation and without consideration of potential uses". More likely, I think, is that the other participants simply disagree with you.
- Regarding the first point: "The image in question is the subject of non-trivial coverage by reliable sources" already covers the use case of "caricatures relating to public officials". The series of images that File:Trump’s arrest (2).jpg belongs to, for example, is permissible under this guideline. This guideline would not permit a random user's AI image caricature of Trump, but even without this guideline, it would be deleted as personal art.
- Regarding the second point: "It doesn't seem to exclude identifiable historic people.", that is working as designed. If it's a notable depiction, it'll be covered by "non-trivial coverage by reliable sources". If it's a random user's AI image of a historic figure, even without this guideline, it would be deleted as personal art. Keep in mind that the image you posted does not depict King Tut. It depicts what a probability engine thinks the prompter is looking for - a young boy with Arabic features in pharaoh attire. It has no way of knowing if any of what it did is accurate. This is why some projects have already banned most AI images.
- The Squirrel Conspiracy (talk) 16:45, 8 December 2025 (UTC)
- sincerely that "Tutankhamun" image is disgusting AI slop. I can see why it is necessary to have these all (non notable) depictions banned. If someone wants to share their (prompted) art, there are venues such as Tumblr, Deviantart and Twitter (or whatever Elon Musk has decided to call it). Bedivere (talk) 16:52, 8 December 2025 (UTC)
- Nothing about it is disgusting. Re "why it is necessary to have these all (non notable) depictions banned": ok, so why? Prototyperspective (talk) 17:05, 8 December 2025 (UTC)
- They are fictional reconstructions produced by a model, not representations of an actual person, making them potentially misleading and outside COM:SCOPE. Allowing non-notable AI depictions would open the door to massive amounts of invented imagery serving no educational purpose. Notable cases are covered by the exception. Bedivere (talk) 22:07, 8 December 2025 (UTC)
- So if a public broadcast documentary shows some well-known historical figure, does that mean the segment is non-educational and the documentary so badly disgusting because they're showing a historical person differently than s/he may have looked? Prototyperspective (talk) 22:40, 8 December 2025 (UTC)
- in that case, the key would be that the recreation would most likely be a human creation or representation, not something created by an algorithm. Bedivere (talk) 00:57, 9 December 2025 (UTC)
- "to assume that…" – I didn't do so, if you read my comment. This is a false statement.
"already covers the use case of 'caricatures relating to public officials'" – No, it doesn't. It means caricatures and critical works are reserved to the privileged few who got reported on in major publications. What chaos if we'd allow common citizens to release critical art and information graphics, right?
"it would be deleted as personal art." – No, it wouldn't (necessarily). It depends on how educational/useful it is.
"a young boy with Arabic features in pharaoh attire" – Exactly, and such things can be useful and interesting, especially if engineered to closely match data about the given person.
"no way of knowing if any of what it did is accurate" – not the AI but the prompter. Prototyperspective (talk) 17:11, 8 December 2025 (UTC)
Support, with the addendum that publications on behalf of someone should also be permitted. --Carnildo (talk) 23:15, 8 December 2025 (UTC)
Support Infrogmation of New Orleans (talk) 01:21, 9 December 2025 (UTC)
Support the proposal and also
Support whacking User:Prototyperspective with a wet trout Apocheir (talk) 04:08, 9 December 2025 (UTC)
- Re trout: if I made an error, point out which, by addressing it (ideally refuting it). Why do educational documentaries use fictional depictions of historical people if such can't be educationally useful? These are banned by this proposal as well. I always support truly considering and addressing points raised in every kind of community decision-making, especially when it's volunteers.
- Another point I didn't mention earlier: the policy rationalizes itself with "When dealing with photographs of people, we are required to consider the legal and moral rights of the subject […] Commons has long held that files that pose such legal or moral concerns", but why would this not apply to paintings or non-AI digital art of identifiable people? And does this really apply to neutral depictions of ancient historical people? There is no need for this policy considering the very low number of such files Commons currently has.
- Prototyperspective (talk) 10:24, 9 December 2025 (UTC)
- Personal art about notable people was always not allowed as being out of scope. That it was only handled through the regular scope rules was never a problem because of the small number of such uploads. Now, with the AI tools available, there are many more such uploads. To avoid long discussions and case-by-case decisions, we need this new stricter guideline. GPSLeo (talk) 11:28, 12 December 2025 (UTC)
"Personal art about notable people was always not allowed as being out of scope" – False. Personal art by non-contributors is speedily deleted, so this is an additional reason why there is no need for this proposed policy. Other than that, I don't know of such a policy, especially not one that clarifies what is meant by "personal art".
"Now with the AI tools available there are much more of such uploads." – Arguably false. There aren't many – currently just 99 in the cat. That's the number of files uploaded every two minutes, maybe?
- Moreover, a significant fraction of them are COM:INUSE, underlining that these files can also be useful on Wikimedia projects, despite that the ones we have are not close to what is possible with these tools in terms of quality (and accuracy, if data on appearance is available). But Commons isn't just there for wikiprojects; it's also for e.g. documentary makers, who often show fictional imagery of historical people (as stated earlier, and which I could prove by linking to several such documentaries with example timestamps).
"To avoid long discussions and case by case decisions, we need this new stricter guideline" – For personal art by non-contributors and hoaxes, files can already be speedily deleted without discussion. For files that are of low quality or not useful, there generally are no lengthy discussions. Enabling users to discuss whether a file should be deleted is a point of COM:NOTCENSORED, which this proposed policy would, as far as I can see, invalidate in terms of its current title/proposition. There are a lot of things where one may prefer not to enable discussion. I still see no need for a stricter guideline.
- Prototyperspective (talk) 11:41, 12 December 2025 (UTC)
Oppose The page refers to "legal and moral" rights as a justification but doesn't cover cases where the legal and moral rights have expired. If there's another good reason to exclude pictures of, say, Cleopatra or Genghis Khan, the policy needs to spell it out. -Nard (Hablemonos) (Let's talk) 17:27, 11 December 2025 (UTC)
- Editorial standards are moral rights too. We seldom make editorial decisions for other wikis on Commons, but here it is needed to protect our project. Having AI-generated images of historical personalities, used to show how this person looked like, is against good journalistic standards. We still allow such images if created in the context of a relevant art project or scientific paper. But we do not want every user to be able to just upload such content. GPSLeo (talk) 11:37, 12 December 2025 (UTC)
"used to show how this person looked like" – This is not the only use-case of such imagery. An example I made is a documentary film video about, say, Ancient Egypt, and I noted I could provide evidence that such documentaries usually do include fictional imagery of historical people.
"is against good journalistic standards" – Commons is not censored based on proposed "journalistic standards". Prototyperspective (talk) 16:12, 15 December 2025 (UTC)
- I think the point is that living people have certain rights that dead people cannot have, and this proposal's main justification lies there. Editorial standards seem to be secondary to the proposal. whym (talk) 23:41, 5 January 2026 (UTC)
- Editorial standards are not moral rights; they're standards used by a certain organization. I see no evidence that journalistic standards exclude the use of tools to show how someone might have looked. Wikipedia certainly uses much worse: random images produced by people who had no idea how the person may have looked, made with paint rather than computers. --Prosfilaes (talk) 03:24, 8 January 2026 (UTC)
- FWIW, those have a certain value in terms of showing how someone was perceived in a given era. For example, all images of biblical figures are from people who had never seen them (unless we count visionaries as actual witnesses). A painting of Jesus by a notable artist has an historical significance that an AI image of Jesus does not, though it would be purely coincidental for either to be a good likeness. - Jmabel ! talk 03:47, 8 January 2026 (UTC)
Support --ReneeWrites (talk) 23:11, 13 December 2025 (UTC)
Strong oppose No reason provided why this is needed when Commons:Scope already exists. --Trade (talk) 15:59, 15 December 2025 (UTC)
- @Trade: I assume you mean Support, otherwise the context is not clear for us :) --PantheraLeo1359531 😺 (talk) 16:03, 15 December 2025 (UTC)
- It might be a reaction to my deletion decision in Commons:Deletion requests/File:GPT-4o Studio Ghibli portrait of Barack Obama.png. Abzeronow (talk) 02:00, 16 December 2025 (UTC)
- "we should not have it because i dont want it" is not a very compelling argument Trade (talk) 16:35, 16 December 2025 (UTC)
- I didn't feel like posting a whole treatise for a DR close on how that AI portrait would likely violate the principles of en:WP:BLP and Obama's moral rights, as well as the fact that an AI portrait is not an accurate representation of a person, and that there is no educational reason why we'd need a Ghibli-style portrait of Obama (which essentially violates the copyrights of Studio Ghibli, btw) when we have plenty of portraits of Obama that are educationally useful. Abzeronow (talk) 00:09, 17 December 2025 (UTC)
Oppose for its treatment of dead, especially long-dead, people. AI images of living people are problematic. AI pictures of King Tut are not. That rule goes much too far in telling the other projects that depend on us what they may use as illustrations. --Prosfilaes (talk) 07:13, 17 December 2025 (UTC)
- @Prosfilaes: what would you think of a rule about some number of years after death? - Jmabel ! talk 19:17, 17 December 2025 (UTC)
- I personally am not interested in diluting the policy for one person's objection when 18 people have already approved it as is. The Squirrel Conspiracy (talk) 23:52, 17 December 2025 (UTC)
- It is not one person. Moreover, things aren't just about the relative number of votes but also about the content of what people have written. Wikipedia for example has a policy about that, en:WP:NODEMOCRACY.
No reason has been given so far for why Commons should censor/disallow/entirely-delete images of the mentioned type in apparent tension and/or contradiction with other policies – namely at least COM:SCOPE and COM:NOTCENSORED – and with so far unclear need for it (implied also by there being no stated reason). Prototyperspective (talk) 00:06, 18 December 2025 (UTC)
- It is not one person. Moreover, things aren't just about the relative number of votes but also about the content of what people have written. Wikipedia for example has a policy about that, en:WP:NODEMOCRACY.
- Sure. Life+50 or life+70 are nice round numbers, and we should generally be able to find photographic evidence of anyone within that range. There are other people who have made similar objections, and leaving such objections unaddressed doesn't lead to good consensus. --Prosfilaes (talk) 02:02, 18 December 2025 (UTC)
Support with Jmabel's caveat. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 01:54, 18 December 2025 (UTC)
Strong support. I don't think we should be hosting deepfakes of any kind, to prevent the spread of misinformation and out of respect towards the person being depicted, among many other ethical and social considerations. It's moon (talk) 03:35, 27 December 2025 (UTC) – Edited on 07:21, 29 December 2025 (UTC)
Support Surely one benefit of this guideline is that it will deter those who attempt to get around copyright restrictions by using AI-generated portraits. However, considering that the Commons version may differ from or even conflict with those of other communities, images that do not comply with this guideline should also be excluded from COM:INUSE rules. 0x0a (talk) 17:51, 28 December 2025 (UTC)
- @0x0a: that last (about this trumping INUSE) sounds like you are making a different proposal than the one about which everyone above has expressed their opinion. - Jmabel ! talk 01:46, 29 December 2025 (UTC)
- Um, I kinda believe INUSE also needs to be updated accordingly, so I opened a new discussion at 👉︎ Commons_talk:Project_scope#Proposed_change:_excluding_images_do_not_comply_with_COM:AIP_from_COM:INUSE_rules -- 0x0a (talk) 10:12, 29 December 2025 (UTC)
- @0x0a: I disagree. Part of the point of the guideline is to not use deepfakes and inaccurate representations of identifiable people, not just on Commons but across all Wikimedia projects. Therefore all images that don't comply with the proposed guideline should, in my opinion, get deleted once the guideline gets ratified, regardless of whether they are currently in use on other projects or not (with perhaps the only exceptions being images that get used to illustrate the concept of deepfake or similar itself → and even in those cases, they should probably still have been published by the person they depict). It's moon (talk) 10:58, 29 December 2025 (UTC) – Edited on 12:13, 29 December 2025 (UTC)
- Frankly, I don't know which of my statements you disagree with. I clearly support this proposal and have already opened a revision discussion at Commons_talk:Project_scope regarding the part that conflicts with the guideline. 0x0a (talk) 14:50, 29 December 2025 (UTC)
- Whoops, I misread INUSE. I thought you were saying that images used on other projects should be kept, which I disagreed with, but I now realize you were saying they should get deleted, so it turns out we both agree. It's moon (talk) 16:05, 29 December 2025 (UTC)
- I think the oppose votes, even if they are the minority, raise valid points about living people and long-dead people. I'd suggest focusing on living people (and perhaps the recently deceased) for now. This is not to say anything goes for images of the dead; it would just be left undetermined in the meantime. I think that a narrower focus would allow us to ratify some important and non-controversial part of the proposal quickly with broader support. We can continue working on the rest and additively revise the policy after that. whym (talk) 11:38, 5 January 2026 (UTC)
Oppose This seems overthought. Take the bit that's important, tweak it, and add it to COM:PIP: "AI images of identifiable people are not allowed on Commons unless they have been published with the subject's permission or the image itself is the subject of significant public commentary in reputable sources."
There's no need to rehash a moral framework, define what a person is, or legislate interactions with overarching standards like SCOPE or DW. There's no need to add technical issues related to things like upscaling; wherever that needs to go, it's not specific to identifiable people. There's no need to try to define a boundary between substantially AI-edited and AI-generated. No need to get into what counts as a good source. The operative bit above sets the standard, and people can sort out the finer details in vivo. GMGtalk 14:18, 5 January 2026 (UTC)
Comment Regarding AI images of long-dead people: while not necessarily problematic when it comes to the legal and moral rights of the subjects, there are other factors that make these images unsuitable for an educational project like Commons. The example of Tutankhamun illustrates this perfectly. We have multiple forensic studies that reconstruct Tutankhamun's appearance based on the actual structure of his skull and mummy (see [1], [2], [3], [4], [5], [6]). However, files such as File:King Tutankhamun brought to life using AI.gif are problematic because they are historically inaccurate, overly idealized misrepresentations. This just goes to show how generative AI can and will make false assumptions about historical subjects and introduce misinformation. It's moon (talk) 14:50, 5 January 2026 (UTC)
- What if a Wikibooks chapter wants to discuss misinformation using AI-generated Tutankhamun images as illustrations? whym (talk) 23:38, 5 January 2026 (UTC)
- I had seen that study before my post with that gif earlier, FYI, and I'm well aware of scientific facial reconstruction.
- First of all, you're making the false assumption that the educational function of media showing ancient people is primarily or even only to educate people on how exactly the given people looked. That is not necessarily the case, probably not even usually. If I wanted to make an educational podcast video about King Tutankhamun, talking about historical facts and the peculiarity of his young age, it would be more interesting if it had some visuals. Such an animation, even if not accurate in the tiniest of details, would help the listener visualize and better imagine what is being talked about; plus it makes them take up more information, as the content is not dull and boring but exciting. An example here is the Fall of Civilizations podcast that I sometimes enjoy listening to. It also has some visuals on YouTube – do you think they're accurate to the last detail? See for example Ep 18, Fall of the Pharaohs (1.1 M views), and its depiction of Ramesses. (Btw, I made some educational podcasts in the past and went to Commons to find free media to use, which was often so gappy that I first had to upload relevant media here from elsewhere; I can see how AI media can sometimes be useful for podcast- and documentary-making, depending on various factors such as how it's contextualized.)
- It depends on how the file is used. If it's used in a Wikipedia article where the text implies or the caption says basically 'This is how Tutankhamun exactly looked like' then it's problematic. But the problem there is how it's used, not that it's on Commons.
- The gif actually looks quite similar to the scientific reconstruction. Maybe you think it's of utmost importance that even the tiniest facial details be exactly accurate in any depiction and that everything else is "misinformation". But that's not what matters to many people or in many contexts, such as when the media is not contextualized as a very realistic restoration and the subject is just, e.g., the young age of Tutankhamun. Moreover, most paintings, especially historic and ancient ones, are very inaccurate.
- The question is not whether there are studies that reconstruct a given person's face – and for most notable long-dead people there aren't any – but whether the media is on Commons / free-licensed. There's basically one person (big thanks to him) who creates (static) restorations of notable people – ~150 files in Category:Works by Cícero Moraes – and sometimes (probably fewer than these) some free-licensed image in some study or elsewhere to import. For many notable subjects there aren't media. Key here is that just because a file is on Commons, doesn't mean it has or needs to be used. Lastly, AI tools here can be leveraged to create scientifically accurate free-licensed depictions of people: one can prompt with descriptions of the scientific reconstruction and additionally select and adjust the results until one has a result where the appearance matches that of the scientific reconstruction.
- Prototyperspective (talk) 00:22, 6 January 2026 (UTC)
- @Prototyperspective: I think that most regulars understand the proposed policy not as tool for an absolute prohibition of AI generated depictions of (long-dead) persons, but rather as a quality-assurance tool to stem any influx of such imagery without clear-cut use case. I as a supporter certainly do.
- I see the current situation as "upload first, ask later", without robust tools for editorial oversight of AI-generated imagery. It's kind of similar to "shall issue" states in the US with regard to firearm laws and concealed carry. I think that most supporters are advocating for the alternative of "Ask yourself first if AI is useful, then if yes, upload", the default being "Don't upload" (or delete by due process if uploaded anyway). Such a mindset in regard to AI slop and AI-generated imagery in general would be a robust tool for the needed curating. To return to the concealed-carry example: we should switch from a "shall issue" to a "may issue" style of permit. This implies that, of course, an AI-generated Tutankhamun image with a demonstrated solid use case (like the Wikibooks thing above your post) can, may stay. I'm advocating that such AI imagery imperatively needs a worked-out context in its description (prompt, use case, ideally the sources) besides the demonstrated need of actual use somewhere; otherwise it's liable to get deleted.
- Lastly, you wrote
AI tools here can be leveraged to create scientifically accurate free-licensed depictions of people: one can prompt with descriptions of the scientific reconstruction and additionally select and adjust the results until one has a result where the appearance matches that of the scientific reconstruction.
As it stands now, the tools available to the general public (ChatGPT, DALL-E, Stable Diffusion...) are built in a way to generate eye candy (as you wrote on the German Forum, I could also refer to de:Klickibunti), not scientifically sound media, as that is likely what their users, the general public, expect. Some software that is specifically made for scientific reproductions (like forensic face generation, digital aging or similar) won't be within the purview of this policy. Regards, Grand-Duc (talk) 18:22, 6 January 2026 (UTC)
- Reasonable point but I disagree: there is no flood of AI imagery, and this proposed policy probably won't be much help with this nonproblem even if it were a problem. It is redundant to the policies COM:SCOPE and COM:DIGNITY while in direct contradiction with COM:NOTCENSORED and, as explained above, COM:SCOPE, where the minor potential benefits are not worth the inconsistency and problems that come with this proposed policy. People can already nominate any such files, or many at once, for deletion.
- The Tutankhamun animation has two educational use-cases I can readily think of and we shouldn't assume we can and need to be able to readily think of potential use-cases:
1. as part of some video or page about Tutankhamun where the animation is not contextualized as being precise to the last facial wrinkle, but just as some rough AI visualization, e.g. showing his young age; 2. as an illustration of how AI tools can be used to visualize people, such as ancient people, in moving (non-static) format (even if some say the quality is low).
are built in a way to generate eye candy
- I know they are not built to make what I described easy. That doesn't mean they can't be used for it. People could for example learn about this use-case and its current issues, and adjust these tools or use them in sophisticated ways to create better-quality results of that type.
Some software that is specifically made for scientific reproductions
- I'm not talking about other software, though. The current models can already be used for this; it's just not easy. Many people think using AI tools is always easy, but it isn't – the way most people use them may be simple, but some people use them in more sophisticated ways that require a lot of skill and expertise. I outlined roughly how these tools, including standard Stable Diffusion etc., can be used for reproductions of scientific accuracy, and you seem to have overread or ignored that. This can already be done; I'm just not skilled enough with these tools, and also not motivated enough to spend my time and effort on proving it to you right now. My prior low-effort uploads relating to this are more about communicating (and enabling) the concept and idea – this again can lead to people fleshing out this application for higher-quality results by adjusting or building tools and developing workflows. But again, not for every application does each facial detail matter, such as for the podcast linked above, where at least one ancient person is depicted without scientific-precision-level accuracy (btw, typo: it has 11 M views, not 1.1 M). Prototyperspective (talk) 19:19, 6 January 2026 (UTC)
- You repeatedly claim that editors overread or fail to deliberate whenever they disagree with your views ([1], [2], [3]).
- My stance is that we need to build policies based on how AI is currently being used, not how it could or may theoretically be used. I'm not against changing the policy later down the line if we see a change in AI accuracy or a tendency to a more responsible usage, but for now we have to address the current reality. It's moon (talk) 21:20, 6 January 2026 (UTC)
- Your claims are ad hominem argumentation, and I will not stand for them. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 21:42, 6 January 2026 (UTC)
- @Jeff G.: Could you clarify on who you are replying to? It's moon (talk) 22:00, 6 January 2026 (UTC)
- @It's moon: I was replying to Prototyperspective, referencing your characterization of their claims. Sorry for not specifying that, I thought my indentation was clear. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 22:49, 6 January 2026 (UTC)
- Understood, thanks. It's moon (talk) 23:05, 6 January 2026 (UTC)
- Absurd claim; if you ignore all I said in my comment imo it's better to not comment at all. Prototyperspective (talk) 22:54, 6 January 2026 (UTC)
- @Prototyperspective: Better for you, maybe. I didn't ignore it, I agreed with @It's moon's characterization of it. I asked you nicely in this edit 16:09, 7 November 2024 (UTC) to stop with the insults and displaying your pro-AI bias. Now, I am warning you: if you do it again, I am going to report you. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 23:21, 6 January 2026 (UTC)
- I'm not insulting anybody, didn't make any ad hominem argument, and am nicely asking you to please not accuse me of things I'm not doing, thanks. Prototyperspective (talk) 23:29, 6 January 2026 (UTC)
- @Prototyperspective Did you or did you not write "you ignore all I said in my comment" 22:54, 6 January 2026 (UTC)? — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 23:36, 6 January 2026 (UTC)
- This is not an insult. It was a rational point: your comment did not address or relate to anything I wrote (where, btw, imo a constructive response would have been to prove me wrong by pointing to the specific text segment to which your comment does relate – but there isn't any "ad hominem" in there, let alone is all of it that). With "ignore" I meant you didn't address any of it, which of course one can do, but I'm also free to point that out even if you disagree with that assessment. Prototyperspective (talk) 23:44, 6 January 2026 (UTC)
- Re.
there is no flood of AI imagery
- my experience speaks otherwise. I've seen a ton of clearly AI-generated images uploaded to Commons, including a substantial number of AI-generated or heavily AI-retouched images of people. Omphalographer (talk) 21:51, 6 January 2026 (UTC)
- How is that a flood? People upload floods of mundane low-resolution photos of all sorts, repetitive high-size mundane photos, and so on – probably hundreds per day on average. There's just a few thousand AI files; 1089 in AI-generated humans – that's near-nothing in Commons. And the depictions of historic/ancient people is an order of magnitude below that. Prototyperspective (talk) 23:00, 6 January 2026 (UTC)
- The vast majority of new AI-generated uploads are deleted, most often under CSD F10. The files which end up categorized - and particularly those which are placed in those "AI-generated by subject" categories - are a small fraction of what's coming in. Omphalographer (talk) 23:22, 6 January 2026 (UTC)
- Good point, but it's not a small fraction in my experience (from over a year of regularly tracking all new AI uploads and categorizing probably more than half of AI-related files) – maybe around as many as are still on Commons.
- If one makes a comparatively large effort to delete low-quality AI media, then it can seem as if it's a flood, but there's days where not even one AI image got uploaded, and I don't think people are making a comparable effort to find and delete low-quality drawings and low-resolution mundane photos. I think we just keep disagreeing on that point, but it's not central to my arguments above – especially since you also say these files are already speedily deleted, so this new policy is not needed, especially not in this indiscriminate/harsh and unjustified shape. Prototyperspective (talk) 23:38, 6 January 2026 (UTC)
- Re.
there's days where not even one AI image got uploaded
- not recently! There are typically somewhere on the order of 50 to 100 AI-generated images uploaded every day. Omphalographer (talk) 21:28, 9 January 2026 (UTC)
- I think that most regulars understand the proposed policy not as tool for an absolute prohibition of AI generated depictions of (long-dead) persons, but rather as a quality-assurance tool to stem any influx of such imagery without clear-cut use case.
Er, what? No, we don't use policy that says these things are "not allowed" and then argue it's fine because it's not an absolute prohibition. Policy should say exactly what it means; laws saying that X is not allowed, with people in the know getting the wink and nod from other people in the know, are a good way to piss off users. --Prosfilaes (talk) 03:24, 8 January 2026 (UTC)
Support Without having read this whole discussion, I've looked at the proposed guideline as it stands today, and I agree with the proposal. It is quite restrictive, but I think we need to be restrictive handling such AI-generated images. We should always be extremely cautious and only allow a selection of such images where there is a very good reason for each individual image to host it at all. Gestumblindi (talk) 09:53, 6 January 2026 (UTC)
- One of the controversial points that emerged in the discussion is whether we are legally required to protect dead people's dignity in the same way as that of living people. What do you think? whym (talk) 10:36, 7 January 2026 (UTC)
- @Whym: Well, legally required? That's a question we could discuss in great detail, as it very much depends on the jurisdiction. Germany, for example, has quite strong postmortal personality rights at least for recently deceased people, while Switzerland doesn't have quite the same concept. I don't know how this is in the US; if we applied the same principles as for copyright, we could require an image (be it real or AI generated) to not infringe postmortal personality rights in the US and in its country of origin... But I think regarding AI generated images, that's a point we don't even need to discuss, as the moral and scope issues should be enough to refrain from hosting such images in most cases. Gestumblindi (talk) 18:49, 7 January 2026 (UTC)
- Yeah, it seems like there is a territory specific component to be considered regarding the living vs dead issue.
- The current proposal's main justification, as it is written, seems to be the moral rights of the people depicted, though. (It's in the first paragraphs.) If there are other, more important rationales, I think the proposal needs to be revised to more clearly include them and argue based on them. Without such (major) revision, I think it would make a more solid argument if we stick with living people within this iteration. whym (talk) 01:20, 11 January 2026 (UTC)
Support Strakhov (talk) 18:27, 6 January 2026 (UTC)
Support Ternera (talk) 14:02, 7 January 2026 (UTC)
Support Chorchapu (talk) 01:38, 14 January 2026 (UTC)
Support No to AI slop. Nemoralis (talk) 12:07, 3 February 2026 (UTC)
- Agree. If that word has not lost all its meaning yet, "slop" refers to low-quality and/or useless content. However, AI images of identifiable people aren't all (necessarily) low-quality – they could, for example, be realistic scenes of ancient cities (such as ancient Alexandria) in which an identifiable famous ancient person is shown (such as Cleopatra), and which could be used in documentary videos about the subject, as just one of many positive use examples. (And I've already seen public-broadcast documentaries that use AI images seemingly made in collaboration with historians, which proves such educational use is realistic.)
- The policy as worded is not needed – low-quality files can simply be deleted, and a dignity policy already exists – and it is unjustified, which is unprecedented in Commons community decision-making. That decision-making has, imo, developed in pretty unhealthy ways and is now e.g. more prone to bias and to external efforts to stimulate desired policies, such as content-deletion policies, which I'm sure many external actors such as governments and companies are quite interested in stimulating (and they also stand to benefit from this one by limiting such depictions to just a very few, instead of democratized widespread access and a general principle of freedom of expression). Prototyperspective (talk) 12:40, 3 February 2026 (UTC)
- I am against the use of visual content generated by AI, even if it is of high quality. Nemoralis (talk) 12:50, 3 February 2026 (UTC)
Oppose as unnecessarily restrictive. If we're using en wiki policy, en:WP:DUE may allow discussion of something with only one reliable source to it; if an AI-generated image is relevant to that discussion, it obviously passes COM:SCOPE and should be able to be uploaded here, even if there is only one reliable source. Additionally, it is unclear whether things like faceswapping are included in this guideline (see wikt:kirkification, for example). Based5290 (talk) 06:25, 12 February 2026 (UTC)
- The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Adding a thing in block notices
[edit]Hello everyone, I would like the community to discuss a suggestion made by @0x0a at COM:ANU. For my part, I agree with them. I quote: "some new users may not be aware of our blocking policy" and "our block message box doesn't explicitly state that creating a new account during the block period is not allowed, which might lead them into an endless cycle of block and block evasion. I found it necessary to clearly state this rule in the block message box."
I would say that we can adjust the block notices to state that the user shouldn't create a new account, as that will further lead to blocks and bans for socking. Shaan SenguptaTalk 14:08, 7 January 2026 (UTC)
- I second this motion. 0x0a (talk) 14:35, 7 January 2026 (UTC)
- Votes/Comments
Support. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 15:34, 7 January 2026 (UTC)
Support --Mateus2019 (talk) 16:04, 7 January 2026 (UTC)
Support I have a hard time believing anyone with any amount of common sense or experience with the internet wouldn't know not to just create a new account, but the addition can't hurt. The Squirrel Conspiracy (talk) 18:12, 7 January 2026 (UTC)
Support It removes plausible deniability. JayCubby (talk) 15:47, 9 January 2026 (UTC)
Support For anyone with common sense, sure. But a lot of users who get blocked seem to lack common sense - apparently it isn't as common as the phrase would imply? - so we might as well spell it out. Omphalographer (talk) 08:26, 10 January 2026 (UTC)
- As to why it's not obvious, I wonder if the root cause might be an assumption that most account suspensions are automated (which can be true for other platforms that new users are more familiar with). If so, we might want to let them know that humans (rather than a big, faceless and glitchy automated system) block accounts here and you are expected to engage with those humans when you want to be unblocked. whym (talk) 01:10, 11 January 2026 (UTC)
Support If the obvious needs to be explicitly stated. --Yann (talk) 18:14, 7 January 2026 (UTC)
Support No down-side. - Jmabel ! talk 03:51, 8 January 2026 (UTC)
Support change to {{Blocked}} template, it would be more helpful to newcomers. Thanks. Tvpuppy (talk) 03:20, 9 January 2026 (UTC)
Support Infrogmation of New Orleans (talk) 15:59, 9 January 2026 (UTC)
- I won't oppose, but let's keep it short. A 40-word increase would probably be too much. I would go for a short sentence (10–20 words) about it, with a link to a longer explanation if necessary. whym (talk) 01:08, 11 January 2026 (UTC)
- @Whym, would you like to suggest a draft? Or maybe @Tvpuppy, you did good work with DR notice. Anyone else is also invited since this has been supported so far. Shaan SenguptaTalk 05:09, 11 January 2026 (UTC)
Support And I'll make a text suggestion (with some placeholder words where wikitext would collide with the quote template) in the new section below. Grand-Duc (talk) 03:38, 12 January 2026 (UTC)
Support I see this as a step in the right direction. Wolverine X-eye 09:37, 12 January 2026 (UTC)
Support Gbawden (talk) 10:50, 24 January 2026 (UTC)
Support I am main worker in unblock requests and I meet too often users, who have created a sockpuppet while blocked, thinking, that this is allowed. Taivo (talk) 11:32, 24 January 2026 (UTC)
Text renovation workbench
[edit]Current text in {{Blocked}}:
You have been blocked from editing Commons for a duration of TIME for the following reason: REASON.
If you wish to make useful contributions, you may do so after the block expires. If you believe this block is unjustified, you may add UNBLOCK REQUEST below this message explaining clearly why you should be unblocked. See also the block log. For more information, see Appealing a block.
I suggest the following additions (in italics here):
You have been blocked from editing Commons for a duration of TIME for the following reason: REASON. A human reviewed your contributions and found them against Commons' rules.
If you wish to make useful contributions, you may do so after the block expires. Creating a new account while this block is in force is in itself a blockable offense and can lead to a permanent exclusion! Do not try to game the system. If you believe this block is unjustified, you may add UNBLOCK REQUEST below this message explaining clearly why you should be unblocked. See also the block log. For more information, see Appealing a block.
— Preceding unsigned comment added by Grand-Duc (talk • contribs) 03:38, 12 January 2026 (UTC)
- Alternative suggestions for the italicized passages:
- An administrator has reviewed your contributions and found them to be against Commons' rules.
- Creating a new account while this block is in force is itself a blockable offense and may lead to permanent exclusion from Commons.
- However, neither that nor the wording above works for an indef-block, where we need something more like "Creating a new account while this block is in force is itself a blockable offense and makes it very unlikely that your block will ever be rescinded."
- And when we block accounts for being sockpuppets, even that is not on the mark; in that case we either can omit this or need something clarifying that this sockpuppet account will almost certainly never be unblocked.
- Jmabel ! talk 05:53, 12 January 2026 (UTC)
- The point made by Whym above at Revision #1145790913, with
I wonder if the root cause might be an assumption that most account suspensions are automated (which can be true for other platforms that new users are more familiar with)
stirred me. I think it'll be worthwhile to underline that humans do the blocking; it's not necessarily clear that something called an "administrator" is actually human, going by experiences in social-network or online-game environments. - Indeed, I did not think about sockpuppets. But Jmabel's suggestion is in my opinion a sound starting point to work on or adapt outright. About socks: either a boolean switch "sock Y/N" would be needed, or isn't there {{Sockpuppet}} available already? Grand-Duc (talk) 07:02, 12 January 2026 (UTC)
- {{Sockpuppet}} goes on the user page, not the user talk page, and is not addressed to the user themself but to admins and others acting in a quasi-administrative capacity. - Jmabel ! talk 20:42, 12 January 2026 (UTC)
- @Jmabel, @Grand-Duc (pings to both of you since you took the initiative to suggest format) and others, are we gonna let this thread, supported by every participant so far, die? Shaan SenguptaTalk 17:26, 23 January 2026 (UTC)
- @Shaan Sengupta: I'd like to see my issues about indef-blocks and blocking sockpuppets addressed. It seems to me that I identified a real problem. Maybe we need distinct block templates for these distinct cases? (I don't think adding a new parameter to this longstanding template is a good idea.)
- Also, before moving forward, we should make sure that all non-subst'd instances of {{Blocked}} are subst'd, because otherwise we are altering the record of what warning someone received. - Jmabel ! talk 21:29, 23 January 2026 (UTC)
- I've added a notice on COM:AN. Maybe more participation can help. Shaan SenguptaTalk 04:55, 24 January 2026 (UTC)
- I agree with @Jmabel above. Probably we could borrow a few templates from en-wiki to move out this generic one. For example, en:Template:Uw-ublock (username), en:Template:Uw-socialmediablock (for those that just upload personal images), en:Template:Uw-copyrightblock (copyvio blocks) and en:Template:Uw-sockblock (socking). signed, Aafi (talk) 08:22, 24 January 2026 (UTC)
- I would
Support importing enwiki templates. That will fix a lot of things. And we can add the above-mentioned proposal to our templates here. Or maybe we can create a unified block template and give it a |type= parameter or something like that for different types of block? Shaan SenguptaTalk 08:44, 24 January 2026 (UTC)
- I would
- I agree with @Jmabel above. Probably we could borrow a few templates from en-wiki to move out this generic one. For example, en:Template:Uw-ublock (username), en:Template:Uw-socialmediablock (for those that just upload personal images), en:Template:Uw-copyrightblock (copyvio blocks) and en:Template:Uw-sockblock (socking). signed, Aafi (talk) 08:22, 24 January 2026 (UTC)
- I've added a notice on COM:AN. Maybe more participation can help. Shaan SenguptaTalk 04:55, 24 January 2026 (UTC)
- I use
{{#invoke:Autotranslate|autotranslate|1=|base=indefblockeduser}} for sockpuppets, and {{#invoke:Autotranslate|autotranslate|1=|2=|base=Blocked user}} for short-term blocks. These come from the User Messages gadget. Yann (talk) 09:06, 24 January 2026 (UTC)
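To make the |type= idea floated above concrete, the unified template's body could branch with a parser function. This is a rough, hypothetical sketch only – neither the parameter name nor these case texts exist in the current {{Blocked}} template; the case wording reuses the draft sentences suggested earlier in this section:

```wikitext
<!-- Hypothetical fragment for a unified block template; |type= is a placeholder parameter name -->
{{#switch: {{{type|standard}}}
| sock     = This account has been blocked as a sockpuppet and will almost certainly never be unblocked.
| indef    = Creating a new account while this block is in force is itself a blockable offense and makes it very unlikely that your block will ever be rescinded.
| #default = Creating a new account while this block is in force is itself a blockable offense and may lead to permanent exclusion from Commons.
}}
```

A talk-page notice would then be placed as, e.g., {{subst:Blocked|TIME|REASON|type=sock}} – keeping in mind Jmabel's point that instances should be subst'd so the record of the warning is preserved.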
Change expectations of (and criteria for becoming) a license reviewer
[edit]- Should we change the criteria for becoming a license reviewer by striking "be familiar with restrictions that may apply, such as freedom of panorama." And replace it with "show basic competency in copyright restrictions, such as by having a history of importing files which are not copyright violations or tagging copyright violations for deletion."
- And should we change the procedure of license review to emphasize that checking if a user is really the copyright holder is generally only necessary for license reviewers to take when they are suspicious of signs of a copyright violation, but is not always necessary on every image.
Rationale: We currently have a massive backlog of items needing a license review. At the present moment, it is over 80,000 items in the surface category alone, with tens of thousands more in the subcategories, and growing. We currently have very stringent rules requiring license reviewers to essentially certify items as free of copyright violations. This has led to license reviews of files taking much longer (requiring extremely thorough investigations), and led to fewer people being trusted with the right.
This all neglects the original purpose of the right, which was to create a record showing that an item was uploaded at the specific location under the specified license, in case the item is later deleted. The purpose was never to certify an item as copyright-violation-free. This means that many items in our backlogs may end up needing to be deleted if the item is deleted at the external website, while at the same time the size of the backlog means that copyright violations that could be caught are ignored anyway. Keep in mind that we created license-reviewer bots that handle this task on certain websites with nearly no ability to check for copyright violations in the same fashion, and they have been granted this user right (so it isn't as though the license-review confirmation ever truly confirmed an item was copyright-violation-free).
Original discussion here. Aplucas0703 (talk) 17:36, 14 January 2026 (UTC)
tagging copyright violations
=> "accurately tagging copyright violations"? - Jmabel ! talk 18:56, 14 January 2026 (UTC)
Support Completely agree with Aplucas0703 (no objections against adding "accurately"). Maybe the preceding discussion should be mentioned? Gestumblindi (talk) 19:04, 14 January 2026 (UTC)
Partial support - the first point looks useful. And it doesn't feel like a change, only like an alternative wording which actually better describes the needed prerequisites for the job. Do we need to go through a full RfC for that wording change? Oppose the rationale, second paragraph. I don't want to see humans restricting themselves to bot-like tasks. So, I do not get what you want / propose with your second point. What would be the exact change you're advocating for, Aplucas0703? Regards, Grand-Duc (talk) 01:18, 15 January 2026 (UTC)
- The purpose of point 2 is to speed up license reviews by clarifying that a license review is not intended to be an extensive check for a copyright violation, but rather a check that a file was uploaded under the stated license. License reviewers may choose to do a deeper check if they see clear red-flags of copyright violations. They are not expected to catch every possible copyright violation or certify an item as copyright-free, as others down the road are expected to be able to find such violations.
- The reason for this leniency is that the backlog is now so long that many copyright violations aren't being checked anyway, on top of files not receiving a basic license review (which could be helpful if a discussion about a file arises after it was deleted in the meantime). We both want what is best for preserving the integrity of copyright on Commons, so I actually think this is better overall for copyright in that regard, since we can't expect one person to do it perfectly. This places more faith in the community as a whole to find copyright violations and to use the information gathered in the basic license review to help them decide. Aplucas0703 (talk) 02:08, 15 January 2026 (UTC)
- If we are going to limit the license review task to simply verifying that the source claimed to offer the license, we probably will want to adjust Template:LicenseReview to allow a status that effectively means something like "I confirmed that the site says it offers the license, but someone with copyright expertise ought to have a closer look because it feels a little fishy." - Jmabel ! talk 06:33, 15 January 2026 (UTC)
Support I agree with the proposal. Yann (talk) 10:12, 15 January 2026 (UTC)
Comment We need YoutubeReviewer working again. That alone removes a few hundred thousand files from the queue, much more efficient than manual sorting. All the Best -- Chuck Talk 19:34, 16 January 2026 (UTC)
- There are about 8,000 files needing a YouTube license review. Aplucas0703 (talk) 00:38, 17 January 2026 (UTC)
- @Alachuckthebuck: The code on github referenced by Commons:Bots/Requests/LicenseReviewerBot is 404-compliant. :( — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 14:24, 17 January 2026 (UTC)
URAA DRs by country
[edit]I've opened this discussion about the creation of a new categorization of URAA requests by country of origin. Friniate (talk) 17:38, 18 January 2026 (UTC)
- no need to ask, you can just do it. Bedivere (talk) 17:44, 18 January 2026 (UTC)
Feedback requested at Commons talk:Derivative works#Proposal: Add a section to better describe when a derivation creates a new copyright
[edit]Hi, I've started a discussion on the Derivative Works policy talk page proposing that we improve our explanations for when a derivation contains sufficient creative work to be treated as a DW, vs when it is a simple copy. I don't think it's ready for a !vote yet, but feedback and improvements are invited to move it along in that direction. Thanks in advance, - Consigned (talk) 12:54, 31 January 2026 (UTC)
Mandatory labeling of original descriptions
[edit]There are many users importing content from third parties who also import the original description from the source. These descriptions are often not properly reviewed for conformance with our COM:NPOV or even our Commons:Civility and Commons:Harassment policies. I therefore suggest that we make it mandatory to mark original descriptions as such. If original descriptions are not labeled as such, this might be considered a COM:NPOV, Commons:Civility, or Commons:Harassment violation by the uploader if the original description violates these rules. Original descriptions that are reviewed as being correct and not violating any policy do not have to be labeled as merely copied original descriptions. GPSLeo (talk) 17:58, 4 February 2026 (UTC)
Oppose that version above, because there's a small but really significant part that bothers me too much in the proposal as it currently stands. That said, I would wholeheartedly
Support an amended proposal: making it mandatory to include original descriptions when importing third-party media, without exceptions. I only see benefits and no downsides; such a step could even be included in importing tools like Flickr2Commons. If the original description is nonsensical, like "DSC_####", then no harm is done when tagging that as original, and such tagging is actually needed when dealing with propagandistic stuff. Another plus: tagging descriptions as stemming from the original imagery source is good practice for any archive, so we should do the same. Regards, Grand-Duc (talk) 18:30, 4 February 2026 (UTC)
- The vast majority of descriptions from sites like Flickr are neither useful to us nor have any bearing on NPOV. Personal stories, nonsense, copy-pastes of Wikipedia articles, full-length press releases, text copied from elsewhere (which is then a copyvio problem), etc - sometimes multiple paragraphs long. We should be pushing users to add better descriptions here, not mandating copying of the junk. Any mandatory copying and/or tagging of original descriptions should be restricted to sources or topics where there is a realistic chance of NPOV issues. Pi.1415926535 (talk) 19:52, 4 February 2026 (UTC)
- While I agree that is frequent, I do not think it is the vast majority. - Jmabel ! talk 20:48, 4 February 2026 (UTC)
- In some circumstances, it is impossible to import original descriptions from Flickr, YouTube, or other sites because they contain URLs or other keywords which are blocked by Wikimedia's spam filter or other abuse filters. Omphalographer (talk) 23:53, 4 February 2026 (UTC)
- On a related note, it would be great if youtu.be was removed from the spamfilter since youtube.com isn't blocked and it just makes video imports, including via video2commons, more laborious having to replace youtu.be links with youtube.com links and having to edit the description more broadly. Prototyperspective (talk) 12:32, 5 February 2026 (UTC)
- Seconded. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 15:00, 5 February 2026 (UTC)
Oppose would just overcomplicate things, not affect files already uploaded, there's no real need for this, and it would be difficult to implement. Descriptions can be changed and often should be. Instead of what's proposed here, I'd suggest info is added to some help page(s) and/or elsewhere that people should put copied descriptions into quotes. Either <blockquote> or " quotes. When importing files via tools, one currently has to add them manually. A problem with automatically adding them is that if the description is changed by the user, it's not a correct quote anymore. Prototyperspective (talk) 19:14, 4 February 2026 (UTC)
Comment I do a lot of work re-curating content from various GLAMs in my region (Pacific Northwest U.S., especially Seattle). There is a lot of complexity here, and I'm not sure what would be best. I don't have time right now to lay out all of my thoughts on this, but I do want to say one thing: there are upwards of 2000 images where I've fed information back to one or another GLAM and they've changed their title and/or description to match what I fed back to them. Any process we set up needs to be able to deal with that situation. - Jmabel ! talk 20:56, 4 February 2026 (UTC)
- i want a technical improvement first. we should have a com:sdc field for "original description" that can take much longer and multi-paragraph text (unlike the present "caption") as input. RoyZuo (talk) 17:39, 5 February 2026 (UTC)
- or actually, we just need a "description" field. one size fits all.
- which can be assigned qualifiers like "source/author" being photographer/editor/museum curator...
- imagine a documentary video, which was given different descriptions by the cameraperson, the producer/director, the film company, the museum that curates it... RoyZuo (talk) 17:44, 5 February 2026 (UTC)
- This is not possible, as Wikibase cannot store long texts. GPSLeo (talk) 18:35, 5 February 2026 (UTC)
- then better fix that. https://www.loc.gov/item/2021666304 the loc repository can have a long summary in their database, so should commons. RoyZuo (talk) 01:02, 6 February 2026 (UTC)
Add autopatrol to file movers
[edit]Special:ListGroupRights shows which rights each group has. file mover doesn't have autopatrol now.
i briefly searched the archives and found the following. Commons:Village_pump/Proposals/Archive/2012/08#c-Philosopher-2012-08-04T23:26:00.000Z-Bundled_rights_(Filemover)_-_+1 a 2012 decision to do exactly this, but never acted upon?
similarly jdx also suggested the same Commons:Village_pump/Proposals/Archive/2019/02#c-Jdx-2019-03-18T08:24:00.000Z-Add_rights_from_the_autopatrollers_user_group_to_the_rollbackers_user_group:_vot RoyZuo (talk) 17:32, 5 February 2026 (UTC)
