This page is used for technical questions relating to the tools, gadgets, or other technical issues about Commons; it is distinguished from the main Village pump, which handles community-wide discussion of all kinds. The page may also be used to advertise significant discussions taking place elsewhere, such as on the talk page of a Commons policy. Recent sections with no replies for 30 days and sections tagged with {{Section resolved|1=--~~~~}} may be archived; for old discussions, see the archives; recent archives: /Archive/2026/01, /Archive/2026/02.
I have an iOS/Android app, out for over a year, that displays images from Wikimedia Commons via hotlinking. About a week ago I started seeing a massive increase in 429 Too Many Requests responses. Nothing on our side has changed: the requests come from user devices (and IPs), and the 429 responses appear after about 10–20 requests (request volume hasn't changed either). This seems to be a problem for other projects as well, as evidenced by this Reddit thread and this bug report. I am sending a user agent with requests as recommended here. Bbbub (talk) 11:14, 5 January 2026 (UTC)Reply
I understand that Wikimedia needs to guard its resources against misuse, but to me this doesn't look like normal rate limiting: we see these 429 responses at very low request counts (10–20), and other developers (in the linked thread) mentioned getting them for single requests. Additionally, other requests sent directly afterwards might go through. Bbbub (talk) 08:01, 6 January 2026 (UTC)Reply
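For anyone hitting the same wall, here is a minimal sketch (Python for brevity, even though the original context is a mobile app) of the two usual etiquette points for hotlinkers: a descriptive User-Agent with contact information, and backing off according to Retry-After when a 429 arrives. The app name and addresses are placeholders, not an official recommendation.
<syntaxhighlight lang="python">
import time
import requests

# Placeholder identity: name your app and give a working contact address,
# as the Wikimedia User-Agent policy recommends.
HEADERS = {
    "User-Agent": "ExampleGalleryApp/1.2 (https://example.org; contact@example.org)"
}

def fetch(url: str, max_retries: int = 3) -> bytes:
    """Fetch a hotlinked image, backing off when the server answers 429."""
    for _ in range(max_retries):
        resp = requests.get(url, headers=HEADERS, timeout=30)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp.content
        # Honour the server's Retry-After header (seconds) if present.
        time.sleep(int(resp.headers.get("Retry-After", "60")))
    raise RuntimeError(f"Still rate-limited after {max_retries} attempts: {url}")
</syntaxhighlight>
Whether this alone avoids the new limits is unclear, but it at least keeps the traffic identifiable and well-behaved.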
@Bbbub Oh, another thing: only specific thumbnail sizes are still allowed. If you are manipulating URLs to get your image to fit a certain dimension, you will also see this erroring with 429s. —TheDJ (talk • contribs) 12:43, 5 January 2026 (UTC)Reply
I am working with different thumbnail size brackets (..., 640, 800, 960, ...) for different screen sizes. I am pretty sure I originally got the sizes from Wikimedia, but let me know if there has been a change, or point me to a source for the allowed thumbnail sizes so I can verify. Bbbub (talk) 07:37, 6 January 2026 (UTC)Reply
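If the permitted bucket list has indeed changed, snapping whatever a given screen needs to the nearest allowed width before requesting a thumbnail keeps the URLs standard. A sketch with an assumed bucket list (verify the actually permitted widths against current Wikimedia documentation before relying on it) that asks the API for the thumbnail URL instead of hand-assembling an upload.wikimedia.org path:
<syntaxhighlight lang="python">
import bisect
import requests

# Assumed bucket list, for illustration only.
STANDARD_WIDTHS = [320, 480, 640, 800, 960, 1280]

def bucketed_width(requested: int) -> int:
    """Round a requested width up to the nearest standard thumbnail width."""
    i = bisect.bisect_left(STANDARD_WIDTHS, requested)
    return STANDARD_WIDTHS[min(i, len(STANDARD_WIDTHS) - 1)]

# Let the API return the thumbnail URL at the bucketed width.
resp = requests.get(
    "https://commons.wikimedia.org/w/api.php",
    params={
        "action": "query", "format": "json",
        "prop": "imageinfo", "iiprop": "url",
        "iiurlwidth": bucketed_width(700),   # -> 800
        "titles": "File:Example.jpg",
    },
    headers={"User-Agent": "ExampleGalleryApp/1.2 (contact@example.org)"},
    timeout=30,
)
page = next(iter(resp.json()["query"]["pages"].values()))
print(page["imageinfo"][0]["thumburl"])
</syntaxhighlight>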
Another problem. You should set the loading=lazy attribute on the img, so that you only download images that are actually likely to be within view. This is especially needed for things like this kind of gallery code. —TheDJ (talk • contribs) 12:46, 5 January 2026 (UTC)Reply
This is a pretty important one for your specific use case. If you are requesting 30 images that all still need to be generated, even though only 15 are visible, then you will quite quickly run into rate limits. —TheDJ (talk • contribs) 13:25, 5 January 2026 (UTC)Reply
We're only loading the images that come into view, yet we're seeing 429 responses with just 10–15 images loaded; in the linked Reddit thread, others described seeing it for one-off loads as well. Bbbub (talk) 07:43, 6 January 2026 (UTC)Reply
I'm the other poster. Perhaps ironically, thumb.php was an unideal workaround I figured out. upload.wikimedia.org would return 429s while thumb.php would not. ~Kevin Payravi (talk) 08:21, 6 January 2026 (UTC)Reply
Chiming in as another external developer who ran into this: hotlinked images loaded clientside (in-browser) were getting 429ed, even in situations where I was only loading a few images. I realized that my webapp was not sending in the Referrer header when requesting hotlinked images. After restoring the Referrer header, I could load thumbnails just fine. So I'm guessing the lack of the Referrer header was one heuristic that (combined with others) resulted in the 429 errors. More details on the Phab thread. ~Kevin Payravi (talk) 08:21, 6 January 2026 (UTC)Reply
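To make the referrer point concrete: in a browser the Referer header is controlled by the page's referrer policy rather than set by hand, but the scripted equivalent of the request that worked looks roughly like the sketch below. The header values and thumbnail URL are placeholders, and whether the referrer alone is what tips the heuristic is undocumented, so treat it only as "what fixed it for one webapp".
<syntaxhighlight lang="python">
import requests

headers = {
    # Placeholder values for illustration.
    "User-Agent": "ExampleWebApp/1.0 (https://example.org; contact@example.org)",
    "Referer": "https://example.org/gallery",  # the page doing the hotlinking
}

url = ("https://upload.wikimedia.org/wikipedia/commons/thumb/"
       "a/a9/Example.jpg/640px-Example.jpg")   # illustrative thumbnail URL
print(requests.get(url, headers=headers, timeout=30).status_code)
</syntaxhighlight>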
I corresponded with WMF developers using the e-mail address given in the error message, and the problem was fixed almost immediately. – Jonesey95 (talk) 21:00, 19 January 2026 (UTC)Reply
Hi, since today I notice that this special search query is not delivering the expected results: "Georg Scholz" -deepcategory:"Georg_Scholz". It is expected to exclude the category, but it includes it. I tested this in two browsers. Has anything been changed? Maybe another of the available special search buttons in the category's MORE menu is affected too; for example, deepcategory:"Georg_Scholz" should show everything in the category but shows nothing. It looks like these two actions are inverted somehow. Can this be fixed please? Thanks Peli (talk) 15:11, 15 January 2026 (UTC)Reply
It is not working for me either. Strangely, for me deepcat only works for categories with only one word in the title (i.e. a title without any spaces); all other category names do not work. Thanks. Tvpuppy (talk) 20:36, 15 January 2026 (UTC)Reply
Thanks, but on Wikimedia Commons, in categories, I still see these errors: no deepcategory items are shown on an empty query, and the category contents are not properly excluded when using -deepcategory. I'm not sure if I need to purge a page or something, because I tested it in three browsers and the results are never the requested items, in all three cases. Peli (talk) 18:49, 17 January 2026 (UTC)Reply
Today I found a bug in cases where the query has an "&". Example: deepcat:"Pellerin & Cie". Trying a deepcat search on that category results in the search deepcategory:"Pellerin_ — no matches, because the category name is wrong: the name to look in is truncated after the "&". I know there must be a patch for this, since my forked button that automates " " -deepcat:" " ("Pellerin & Cie" -deepcat:"Pellerin_&_Cie") has such a patch and works well at this time. The fork is hosted and patched by @Samwilson: . I hope the regular button/function can be patched as well, without affecting the opposite function, i.e. searching outside of a category. Peli (talk) 11:15, 22 January 2026 (UTC)Reply
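The truncation looks consistent with the "&" not being URL-encoded when the button builds the search URL, so everything after it is parsed as a new query parameter. That cause is an assumption on my part (I have not read the gadget's code), but the difference is easy to demonstrate:
<syntaxhighlight lang="python">
from urllib.parse import urlencode

category = "Pellerin & Cie"
search_value = 'deepcategory:"' + category.replace(" ", "_") + '"'

# Broken: the raw "&" ends the search parameter early, so the server only
# sees deepcategory:"Pellerin_ and treats the rest as another parameter.
broken = "https://commons.wikimedia.org/w/index.php?search=" + search_value

# Fixed: encode the whole search value so the "&" is sent as %26.
fixed = "https://commons.wikimedia.org/w/index.php?" + urlencode({"search": search_value})

print(broken)
print(fixed)
</syntaxhighlight>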
So not only a caching issue, but one not fixed by action=purge (which I had already tried). So just wait and hope it "heals"? - Jmabel ! talk 19:08, 16 January 2026 (UTC)Reply
Latest tech news from the Wikimedia technical community. Please tell other users about these changes. Not all changes will affect you. Translations are available.
Updates for editors
The tray shown on Special:Diff in mobile view has been redesigned. It is now collapsed by default, and incorporates a link to undo the edit being viewed, making it easier for mobile editors and reviewers to take action while keeping the interface uncluttered. [1]
The Global Watchlist lets you view your watchlists from multiple wikis on one page. The extension continues to improve — it now automatically determines the text direction (ensuring correct display of sites with unusual domain names) and shows detailed descriptions for log actions. Later this week, a new permanent link for page creations and CSS classes for each entry element will be added. [2][3][4][5]
View all 32 community-submitted tasks that were resolved last week. For example, the previously observed issue in Vector 2022, where anchor link targets were obscured by the sticky header, has now been addressed. [6]
Updates for technical contributors
As mentioned in the October 2025 deprecation announcement, the MediaWiki Interfaces team will begin sunsetting all transform endpoints containing a trailing slash from the MediaWiki REST API the week of January 26. Changes are expected to roll out to all wikis on or before January 30. All API users currently calling them are encouraged to transition to the non-trailing-slash versions. Both endpoint variations can be found, compared, and tested using the REST Sandbox. If you have questions or encounter any problems, please file a ticket in Phabricator to the #MW-Interfaces-Team board.
The WMF Wikidata Platform team (WDP) has published its January 2026 newsletter. It includes updates on the legacy full-graph endpoint decommissioning, the User-Agent policy change, the monthly Blazegraph migration office hours, and efforts to reduce regressions caused by the legacy endpoint shutdown. As a reminder, you can subscribe to the WDP newsletter!
The Wikimedia Hackathon Northwestern Europe 2026 will take place on 13-14 March 2026 in Arnhem, the Netherlands. Applications opened mid-December and will close soon or when capacity is reached. It's a two-day, technically oriented hackathon bringing together Wikimedians from the region. Hope to see you there!
A cluttered, unoverseeable sea of blue that's exhausting to go through – really an improvement?
Great to see some work on the Global Watchlist – it includes a button to see a diff of all unseen changes with one click; I don't know how people can use the Watchlist without such a button. However, it's still not really usable in practice: unless you check all your Watchlist items every single day, there are seas of blue username links, and the diff button sits at an always-varying position after the article title, impeding opening many diffs one after another. Here are two wishes calling for this to be changed; I don't think the Global Watchlist is really usable in its current shape, so I hope somebody will eventually fix these problems (at least via options).
I have just uploaded the file 5-Cell Schlegel Diagram.stl. All the previews have been rendered correctly. However, when I display the file in the media viewer, it is rendered incorrectly: basically, the entire object is displayed in pure white, as if the object was completely overexposed. The display problem occurs under Linux as well as under iOS. Is there a way to diagnose why this is happening? What can I do to make the STL file render correctly in the media viewer? Any suggestions will be much appreciated. Carsten Steger (talk) 14:34, 20 January 2026 (UTC)Reply
I get the same incorrect rendering under Windows with Firefox and Internet Explorer. Since the preview rendering with 3d2png works correctly, I suspect this is a bug in the 3D extension of the media viewer. Maybe the bug can be spotted by comparing the code of 3d2png to that of the 3D extension? Carsten Steger (talk) 07:58, 21 January 2026 (UTC)Reply
Help with changing links based on display language
@Immanuelle: Those are not in the SVG. Those are just ImageNotes (which contain Wikitext). You can do pretty much anything inside an ImageNote that you can do anywhere else on a page in Commons, and you do it exactly the same way. - Jmabel ! talk 01:22, 21 January 2026 (UTC)Reply
Oh, interesting, the two ImageNotes (which were what I instantly noticed as clickable) can serve something of the same function, which is why I was confused. I know there are ways to do multilingual SVGs, and I imagine the issues are the same for links in the SVG as for text, but I'm out of my depth there. You might ask one or more of the people who participated in #How to specify a SVG file's default language? above, which is clearly a closely related issue. - Jmabel ! talk 01:50, 21 January 2026 (UTC)Reply
For links in the SVG itself, when the file is viewed directly, you can use the <switch> element. However, this will use the browser's language, which is sometimes different from the language the user expects (and different from whatever site they are viewing). If you want to use this in Wikipedia, I would suggest using an image map instead. Bawolff (talk) 23:35, 21 January 2026 (UTC)Reply
One option is to link to Wikidata items instead of wiki articles. For example, instead of linking to https://en.wikipedia.org/wiki/R%C4%81gar%C4%81ja, link to its Wikidata item: Rāgarāja (Q1188900). The en.Wiki article will give you the link on the right-hand side. The user can then scroll down to the bottom of the Wikidata page and click a link to his favorite wiki.
A more involved option is a redirector URL that takes the Wikidata item and looks at the HTTP Accept-Language header: it redirects the user to the article on their preferred wiki if one exists, and to the Wikidata item otherwise.
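A rough sketch of that redirector's lookup logic (the function name and hosting environment are hypothetical): parse the first Accept-Language tag, look for a matching Wikipedia sitelink on the item via the Wikidata API, and fall back to the item page.
<syntaxhighlight lang="python">
import requests

def target_for(qid: str, accept_language: str) -> str:
    """Pick a redirect target for a Wikidata item based on Accept-Language."""
    lang = accept_language.split(",")[0].split(";")[0].split("-")[0].strip() or "en"
    r = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={
            "action": "wbgetentities",
            "ids": qid,
            "props": "sitelinks/urls",
            "format": "json",
        },
        headers={"User-Agent": "ExampleRedirector/0.1 (contact@example.org)"},
        timeout=30,
    )
    sitelinks = r.json()["entities"][qid].get("sitelinks", {})
    link = sitelinks.get(f"{lang}wiki")
    return link["url"] if link else f"https://www.wikidata.org/wiki/{qid}"

print(target_for("Q1188900", "de-DE,de;q=0.9,en;q=0.8"))
</syntaxhighlight>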
Hi, I have a VBA script that I use to download the photos that I have uploaded to Wikimedia Commons – in practice, nowadays, usually a recent subset of them. (I won't go into the reason why I want to download photos that I have already uploaded, but there is a reason.) This used to work perfectly (say a year or two ago) even for hundreds and hundreds of images. Now it struggles to do a dozen or so before I am blocked, I believe for making too many requests in too short a time. Then eventually it will start working again and let me do a few more, slowing down to a trickle. I suppose this blocking must be a recently introduced feature? I have never been blocked from browser access, however; even immediately after the script is blocked, I can open pages in the browser (from the same IP address). Anyway, I can try adding pauses between downloads. I've tried five- or ten-second pauses, but it seems to make little difference so far. I wouldn't have thought my volume was particularly unreasonable on the scale of server traffic generally. Has anyone got any info about what is and isn't permitted, or what I can do to mitigate this issue? Thank you. ITookSomePhotos (talk) 22:02, 21 January 2026 (UTC)Reply
Original images. I have found that a thirty-second pause between downloads (typically 3–5 MB each) helps to prevent rapid blocking, but this does not work indefinitely: eventually I still get blocked. When that happens, I now look at and honour the "Retry-After" value, per the documents that Bawolff linked to. I have never seen any value other than 1000 (seconds). Even waiting this long, or a bit longer for good measure, does not guarantee success; I have seen at least three of these in a row, i.e. a total wait of 50 minutes. It seems a bit extreme, given that with 30-second gaps I am not putting any more load on the server than normal browsing through a browser. ITookSomePhotos (talk) 18:58, 2 February 2026 (UTC)Reply
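For comparison, this is roughly that pattern sketched in Python rather than VBA. The pause length and file list are illustrative; the essentials are a descriptive User-Agent, strictly serial requests, and honouring Retry-After.
<syntaxhighlight lang="python">
import time
import requests

session = requests.Session()
session.headers["User-Agent"] = "OwnUploadsBackup/0.1 (contact@example.org)"  # placeholder
API = "https://commons.wikimedia.org/w/api.php"

def original_url(title: str) -> str:
    """Look up the original-file URL for a File: page via the API."""
    r = session.get(API, params={
        "action": "query", "format": "json",
        "prop": "imageinfo", "iiprop": "url", "titles": title,
    }, timeout=60)
    page = next(iter(r.json()["query"]["pages"].values()))
    return page["imageinfo"][0]["url"]

def download(url: str, dest: str, pause: float = 30.0) -> None:
    """Download one original, waiting out any 429 before retrying."""
    while True:
        r = session.get(url, timeout=120)
        if r.status_code == 429:
            time.sleep(int(r.headers.get("Retry-After", "1000")))
            continue
        r.raise_for_status()
        with open(dest, "wb") as f:
            f.write(r.content)
        time.sleep(pause)  # fixed gap between files
        return

for name in ["File:Example.jpg"]:          # replace with your own uploads
    download(original_url(name), name.removeprefix("File:"))
</syntaxhighlight>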
How to categorize pages using the OWIDslider gadget?
Relatively recently, a tool appeared – or is seeing much more use – that allows uploading hundreds of OWID files at once (which is great!)
There is now also a gadget that uses these hundreds of files for semi-interactive viewing, the OWID gadget (also great!)
However, categories are missing for both the files and those pages. Now I'd like to also categorize these; I was thinking:
The files belonging to one visualization (one topic) should be in one category, which can then be categorized as described above and get a charts and a maps subcategory – this would best be done at upload via the OWIDUploader tool, especially because there usually are hundreds of files, one per country, which each should go into Category:Charts by country. This is why I'd only like to do the next things, and hope more gets done at User talk:Jmh649/2#OWID uploads with false and missing categories, including for files already uploaded. Any updates on this, @Doc James:? (e.g. I saw Category:Our World in Data graphs by country has been created, but I still often come across uncategorized OWID files)
The interactive visualizations (example below) should also be categorized into categories like e.g. Category:Meat statistics. Would the best way to do that be to categorize the pages in Category:Pages using gadget owidslider (indexed also in Commons:List of interactive data graphics)? If so, how could one comprehensively categorize these (e.g. see which of them haven't yet been categorized into a topical cat), and isn't it a problem that a page with so many files loads slowly and incompletely? If not, what would be the better way – what about creating some kind of separate page with just the visualization, which is ready to use, loads quickly, and can more easily be categorized (e.g. a page like OWID: Daily meat consumption per person in the meat statistics cat instead of Template:OWID/daily meat consumption per person)?
I posted this on your normal talk page. That is just the name of your archive page, which it is your responsibility to rename or keep as is. Okay.
There are many examples of these files missing categories; here's a recent one I came across, which may also elucidate the complexities involved (e.g. do we want these hundreds of files to show up in deepcategory wall-of-images views, or somehow separate them?):
Regarding separation, or enabling the user to exclude the hundreds of files from OWID mass uploads: maybe it would be good to make sure files are put into Category:Uploaded by OWID importer tool, so the user can exclude these, and maybe add a note about this at the relevant places. I think views/scans usually should only show one or two pages/files per set – a page with the interactive data graphic. Prototyperspective (talk) 12:20, 23 January 2026 (UTC)Reply
With respect to OWID: this is a namespace and would require permission to put content in it. If you request and get consensus for that, I would be happy to put these all there by bot.
So the graphs-by-country cat is populated by a bot? If so, that sounds good – then there's less need to set the cat at upload time, and it could explain why you say "it might be easier to do more categorization after the upload rather than during". Regarding "With respect to OWID: this is a namespace" – I don't understand what you mean. Basically, the main issue of this thread is that the files are missing from topic categories like Category:Meat statistics or Category:Renewable energy statistics. Because the files could clutter those pages and the category's search results, leaving them out may be good, but the interactive visualization should be in these categories – so for example a page OWID: Daily meat consumption per person that loads quickly and contains just the interactive visualization embedded above. The "OWID" there is not a namespace but part of the page title (just standardized to have the OWID prefix), so in this case it would be in the "Main"/gallery namespace. If this is a good/best solution to that issue, then all those visualizations indexed (mostly or entirely?) in Category:Pages using gadget owidslider would still need one such page each. Prototyperspective (talk) 18:47, 27 January 2026 (UTC)Reply
All the pages with the OWID visualizations are listed in Category:Pages_using_gadget_owidslider – about 500 of them.
Could you please address the questions – I linked that category in the comment you're replying to. Please see the part around "and isn't it a problem…" above. I thought it would be a good idea to discuss this instead of figuring out myself what the best solution would be and then unilaterally implementing whatever it is without discussion, but this is not much of a discussion. If you are saying you think a new separate namespace would be a good approach, that's not clear, and I don't understand why that would be needed or better than e.g. creating the mentioned kind of mainspace pages. Prototyperspective (talk) 00:09, 28 January 2026 (UTC)Reply
What I am saying is we cannot use the title "OWID:Text" unless we get approval from the community at large as that is a new namespace.
I personally think Category:Pages_using_gadget_owidslider is sufficient for me to find all these. If you want just the interactive graphs placed on a separate page, we could do that; we would just need a naming structure.
Thanks for clarifying. Okay, so there can't be a colon in the prefix, as that indicates/requires a new namespace for pages that should be in the "main"/gallery namespace. The prefix could also be OWID – or OWID/, for example – the particular suggestion was just an idea/illustration.
Regarding "I personally think the Category:Pages_using_gadget_owidslider is sufficient for me to find all these": as elaborated above, the issue is making these findable for people looking for statistics on a certain topic, via either browsing the associated category page or the search. Just having them in that category doesn't achieve that – it's about surfacing these visualizations to people who could be interested in them / to whom they are useful.
Regarding "If you want just the interactive graphs placed on a separate page we could do that": do you know if there's a way to do that for all the pages in the Pages_using_gadget_owidslider cat (maybe using some bot or tool)? And regarding "would just need a naming structure": that's what the OWID: visualization name was all about – do you have another suggestion, or which prefix (or suffix) would you prefer?
Regarding "but not sure we need this": as explained in the post starting the thread, these pages take long to load and are problematic to edit. They require downloading lots of data, and HotCat does not work on them, so one can't edit the usual way with autocomplete. Opening the wikitext editor takes long and is prone to failing. Also, linking to these pages is probably confusing, since people may wonder what all the many files on the page are about and/or browse those instead of the interactive visualization. An alternative approach would be to remove the files from the template page, but I'm not sure whether that would break the visualization. The pages could link to each other. Prototyperspective (talk) 13:57, 28 January 2026 (UTC)Reply
Asking a wider audience is what I was/am doing here; I thought and think this is the most appropriate large-audience place to discuss this. I pinged you in the comment, but it was not addressed to just you. Okay, so like this: OWID/daily_meat_consumption_per_person? Yes, exactly, thanks. I edited the page to make the visualization larger and center it – what do you think? And is there a way to create these pages for all the visualizations? Then I'd remove the cats from the large template pages and add them to these pages, similar to how files are categorized (nearly all files in Category:Our World in Data, except for these OWID Importer mass uploads, are by now categorized by subject, so this is basically the remaining issue to complete the topical categorization of these data graphics). Prototyperspective (talk) 13:19, 29 January 2026 (UTC)Reply
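If a bot run is wanted for that, here is a minimal pywikibot sketch of the idea: one small gallery-namespace page per slider template, transcluding the visualization so it can be categorized on its own. It assumes the pages in that category are the Template:OWID/... pages, and the OWID/ naming scheme and page text are only the suggestion discussed above, not an agreed convention.
<syntaxhighlight lang="python">
import pywikibot
from pywikibot import pagegenerators

site = pywikibot.Site("commons", "commons")
cat = pywikibot.Category(site, "Category:Pages using gadget owidslider")

for template_page in pagegenerators.CategorizedPageGenerator(cat):
    # e.g. "Template:OWID/daily meat consumption per person"
    if not template_page.title().startswith("Template:OWID/"):
        continue
    name = template_page.title(with_ns=False).removeprefix("OWID/")
    target = pywikibot.Page(site, f"OWID/{name}")        # suggested naming scheme
    if target.exists():
        continue
    target.text = (
        "{{" + template_page.title(with_ns=False) + "}}\n\n"
        "[[Category:Pages using gadget owidslider]]\n"    # topical cats to be added later
    )
    target.save(summary="Create standalone page for OWID visualization (sketch)")
</syntaxhighlight>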
Thanks, but is that module for SDC in addition to Wikidata? The data values of the 2 files mentioned (at least those in the Summary/Information section) are pulled from SDC, although the properties are from Wikidata. I found core.editAtSDC and core.editAtWikidata in the meantime. Perhaps both modules are to be edited? whym (talk) 12:01, 28 January 2026 (UTC)Reply
Is it possible to create an edit filter that would check the file EXIF "user comment" field? Many copyvios have the user comment "Screenshot", and it would be useful to automatically tag such uploads. Preventing them is not necessary, as there are legitimate files as well. MKFI (talk) 12:21, 25 January 2026 (UTC)Reply
Let me hazard a guess: it's not possible, unfortunately (at least right now). Perusing Commons:Abuse filter and mediawikiwiki:Extension:AbuseFilter/Rules format, I didn't see any mention of (pre-made) EXIF-related variables. If, and that's a large if, the software could allow for arbitrary user-defined variables, then it's conceivable that such an EXIF-specific rule could be developed, but that's way beyond my own abilities. Pinging @Lustiger seth:, whom I know to be knowledgeable about the edit filter machinery; he may be able to provide technical insight. Regards, Grand-Duc (talk) 13:26, 25 January 2026 (UTC)Reply
Currently I don't see any possibility for using EXIF variables.
There are file attributes such as file_mime, file_size, file_width, but nothing for file content (including EXIF data). So I guess you should create a feature-request ticket at https://phabricator.wikimedia.org. The only related ticket I found was https://phabricator.wikimedia.org/T170251, but that's only vaguely related.
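Until AbuseFilter grows something like that, the check itself is easy to run outside the filter, e.g. in a bot or report script that scans recent uploads and lists candidates for tagging. A sketch of just the EXIF part using Pillow; the tag IDs are the standard Exif ones, and the reporting/tagging step is left out.
<syntaxhighlight lang="python">
from PIL import Image

EXIF_IFD = 0x8769       # pointer to the Exif sub-IFD
USER_COMMENT = 0x9286   # Exif "UserComment" tag

def user_comment(path: str) -> str:
    """Return the EXIF UserComment of an image, or '' if absent."""
    with Image.open(path) as img:
        value = img.getexif().get_ifd(EXIF_IFD).get(USER_COMMENT, b"")
    if isinstance(value, bytes):
        # UserComment starts with an 8-byte character-code prefix (often b"ASCII\0\0\0").
        value = value[8:].decode("ascii", errors="ignore")
    return value.strip("\x00 ").strip()

if "screenshot" in user_comment("upload.jpg").lower():
    print("candidate for tagging")
</syntaxhighlight>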
@Doc James: that's a really dirty way of removing metadata, and it inevitably reduces the quality. May I suggest that you edit the EXIF directly? There are hints for that in COM:Exif; I'd say use ExifTool and ExifToolGUI (as long as you're on Windows), handy pieces of software for that. Regards, Grand-Duc (talk) 10:56, 27 January 2026 (UTC)Reply
Latest tech news from the Wikimedia technical community. Please tell other users about these changes. Not all changes will affect you. Translations are available.
All users with registered accounts can now use passkeys for two-factor authentication (2FA). Passkeys are a simple way to log in without using a second device. They verify the user's identity using a fingerprint, face scan, or a PIN code. To set up a passkey, first set up a regular 2FA method. Currently, to log in with a passkey, users must also use a password. Later this quarter, passwordless login will allow users to log in with a single click and a passkey. Users with advanced rights will also be required to have 2FA enabled. This is part of the Account Security project.
Unregistered contributors on blocked IPs or blocked IP ranges can now appeal a block on-wiki by creating a temporary account and appealing on the user talk page, unless the "prevent this user from editing their own talk page" option is enabled. This solves the problem of logged-out users being unable to use the default unblock process via the user talk page. [7]
View all 20 community-submitted tasks that were resolved last week. For example, the Two-Factor Authentication (2FA) methods description on the management page has been updated. It is now clearer and easier for users to understand and make use of. [8]
Updates for technical contributors
A new AbuseFilter variable, account_type, has been added to provide a reliable way to determine the account type being created in the createaccount and autocreateaccount actions. As part of this change, the variable accountname has been renamed to account_name, and accountname is now deprecated. Edit filter managers should update any filters that use hardcoded account type checks or the deprecated variable. [9]
Image thumbnails that are requested in non-standard sizes, and using non-standard methods such as direct requests to upload.wikimedia.org/…, will stop working in the near future. This change is to prevent ongoing external abuse by web scrapers and bots. Some users with custom CSS/JS, interface admins who can fix gadgets and local skins, and tool authors will need to update their code to use standard thumbnail sizes. Details, search links, and examples of how to fix them are available in the task.
Is it only me, or are annotations broken for anybody else as well? I neither see any existing annotations nor does the button "Add note" appear. And no, I didn't switch it off in my settings. Chianti (talk) 22:20, 28 January 2026 (UTC)Reply
Works for me. That's about the last thing to load, via JavaScript, after a whole bunch of other JavaScript, so if any other JS is broken, or if your JS times out, it could be missed. If you are enough of a techie to use the browser console, you may find a coherent error message, possibly very tangentially related to the symptom. - Jmabel ! talk 05:49, 29 January 2026 (UTC)Reply
Works for me in Chrome, Safari, and Firefox, but not Vivaldi, and it doesn't give any errors. While testing, I found that annotations won't show if the window is too narrow (less than about 830 pixels wide), so if your browser is blocking access to the window size (or you're using it on a phone in portrait orientation), that might be the problem. --Carnildo (talk) 22:17, 29 January 2026 (UTC)Reply
As you probably know, there is not a lot of WMF developer support for Commons these days. The one thing I'm aware of that is moving forward is that they have a contractor working on video2commons. At meta:Product and Technology Advisory Council/Unsupported Tools Working Group#January 2026 they report some recent work; what is probably of most general interest is support for playlists and user-library uploads, better subtitle extraction, and several aspects of support for importing from YouTube.
From what I can tell, the one contractor currently working directly on this is doing good work; still, I continue to believe that Commons could benefit greatly from far more WMF dev support. It is good that they've shown that certain work can be successfully contracted out, but it is equally clear that most cannot, and that we need at least a handful of developers who bring – or, well, develop – an understanding of Commons, not just of some individual tool. (With reference to "equally clear that most cannot", I was party to some of the discussions of where to focus this resource, and several higher priorities were rejected because handing them to a contractor would set that contractor up to fail.) - Jmabel ! talk 18:01, 29 January 2026 (UTC)Reply
@Bawolff: I don't remember everything that went by, but I believe improvements to Video2Commons were the other leading candidate. I remember there was discussion of the Upload Wizard (apparently a bit of a mess internally, which—being a developer myself—amazes me: it is not doing anything I think should be complicated); CropTool was discussed, but it looks like a new group of volunteers have taken that on successfully; there was some discussion of MediaViewer, which I pretty much don't use so I had and have nothing intelligent to say on that front; there was definitely talk about getting seriously behind one of the batch uploading tools, probably PattyPan, and I think that was one that was rejected for this go-around on the basis of too large (there was only funding for three person-months); I also remember there was some discussion of rewriting Cat-a-lot (another where I personally have little to say: I'm pretty fine with it as it is).
I could be missing something here. When I was brought into this, I was promised it would be at most a few hours of my time in any given month, and I was in a reactive/reviewing mode and did not take copious notes. The most proactive thing I did here was to put about 8-10 tools on their radar that had not been on their initial list.
FWIW, my own ideas as to where I'd put resources are very different. My own view is that the single most valuable thing we could have is a full-time combination PM and volunteer coordinator to help coordinate volunteer-based work on tools, almost the opposite use of resources from a short-term, outside, contract-based developer. My overall take: not nearly enough WMF resources are devoted to tool support or to Commons; given that limitation, what they are doing is sane, but probably not optimal; "sane" is a big improvement on where we were a few years back. - Jmabel ! talk 18:23, 3 February 2026 (UTC)Reply
Regarding "not nearly enough WMF resources are devoted to tool support or to Commons": agree, if by "tool support" you mean tool development; and I note that I can't think of many things the WMF could do that would be effective, relate to Commons, and not have technical development as the main part. Regarding "I remember there was discussion of the Upload Wizard": it would be great if development on it continues – see the ideas and requests at Commons_talk:WMF support for Commons/Upload Wizard Improvements (e.g. the recent threads near the bottom).
Regarding "It is good that they've shown that certain work can be successfully contracted out, but it is equally clear that most cannot": probably there are still lots of things that can be contracted out, and so far I've not seen explanations of which things can't be (and the reasons why). Other ideas are hiring more devs (remote, in chapters, and locally in the US), m:Wish bounties, and facilitating & aiding more open-source volunteer development, e.g. via the concrete, feasible ideas that I've outlined here. Prototyperspective (talk) 18:50, 3 February 2026 (UTC)Reply
A couple of issues with bringing in [relatively short-term] contractors for s/w development generally, nothing here specific to WMF:
In general, it is easiest to use contractors for very well-defined tasks that can reasonably be expected to be completed in a specific time frame, easily be tested to determine task completion, and not need a great deal of ongoing maintenance after that. The farther you get from that in any respect, the harder to use contractors.
Using contractors neither leverages nor builds up organizational memory. Unless a contractor is very fast on the uptake, it is far more crucial than for hiring a long-term employee to hire someone who is already strong in all of the relevant technologies, so in making a hire you may have to trade that off against overall skill level. Also, the lower the overall quality of any existing body of code, the harder it is for a contractor to take on doing almost anything with it, compared to someone who is in it for the long term.
Re wish bounties - bug bounty (not the security type, but the type where you pay people to work on specific feature requests) has failed in basically every open source project where it has been tried. I would suggest avoiding that unless people look carefully into why it failed in other projects and avoid falling into similar traps. Bawolff (talk) 09:24, 4 February 2026 (UTC)Reply
Interesting; could you provide some useful link(s) regarding this, here or at the talk page of the page linked (examples of where it failed, info on why, etc.)? I hadn't heard that, and thought I had heard of some quite successful cases. There are probably many ways this can be done – for example, an org could offer bounties for implementation of any of a large set of wishes, or Wikipedians could crowdfund development and specify ranked wishes, etc. I'd love to donate to get tangible wiki wishes implemented under certain circumstances. Prototyperspective (talk) 12:07, 4 February 2026 (UTC)Reply
After googling, maybe it's not as clear-cut as I thought. I did find https://ziglang.org/news/bounties-damage-open-source-projects/ and https://www.dokuwiki.org/bounties . That said, it is an idea people try to push in the open-source world a lot (mostly companies who are hoping to get a cut), but there are very limited success stories. It's important to keep in mind that the going rate for MediaWiki consulting is somewhere in the range of $50–$150 USD/hr (or higher), so usually bounties don't make financial sense for the people doing them, as they are usually not high enough. Of course, people often have other motivations in addition to or instead of money, but keep in mind that adding small amounts of money can sometimes backfire; many people who would do things for free would refuse to do them for small or medium amounts of money. The other problem with bounties is that sometimes there is misalignment – people try to do the bare minimum and you end up with unusable crap, or worse, the person who created the bounty doesn't understand the problem and makes it for the wrong thing. In many situations the technical part is only a small part of the work, and getting someone to do it does not really move the needle on actually making it happen (I agree 100% with Jmabel on the benefits of a good PM). As an aside, I have a vague memory of User:Eloquence trying to set up a bounty for DPL or RSS MediaWiki features a long time ago, but I can't find any reference to it online, so maybe that is only in my imagination. Bawolff (talk) 14:16, 4 February 2026 (UTC)Reply
P.S. Don't let me dissuade you, though: even though I have doubts about the value of monetary rewards, getting a list of feature requests to a state where you could run a successful bounty program (i.e. you have a ranked list of features where the requirements are well documented and all appropriate stakeholders have bought in to the design choices, plus dedicated CR resources) would be hugely valuable. People would probably start doing things off that list without any monetary compensation. The primary blockers for volunteer devs are that nobody knows what the community actually wants, nobody really wants to put in the social effort to get consensus on the technical requirements, and nobody really wants to do things if it's a gamble whether you will get timely code review. Fix all that, and you won't need to pay people. Bawolff (talk) 14:43, 4 February 2026 (UTC)Reply
Interesting feedback, thanks. I don't see the point about the bounties not making financial sense; it's some monetary incentive versus no such incentive. I was well aware of the potential backfiring issue, but I'm not sure if that's what you meant: I was thinking that if some people get money for their development, the volunteers who do it without compensation would feel less motivated and/or would implement the issues with bounties instead of the other things (the latter is not necessarily all bad). I don't think people doing just the bare minimum would be a substantial problem, as it would still have to pass review and validation, and just doing the minimal implementation is fine; doing the more advanced stuff later and building the minimum thing first is a great principle. It also makes it much less likely that there's work on things that will later be scrapped. There are lots of open bugs and wishes that already form such a list of feature requests, such as many of the good-first-bug items, or issues with lots of subscribers who wait for them to finally be implemented. I don't see many devs working on many of these; something that could be done is to make such lists more visible, e.g. via the aforementioned banner that links to a landing page where such wishes and issues could be listed, so that devs can see which of them are interesting to them and worth implementing. That nobody knows what the community actually wants is false, and probably most issues don't need some "consensus on the technical requirements". Timely code review is something the WMF can and should improve, for example by hiring more devs who do these reviews, or again by facilitating more volunteers to join these efforts. I don't say wish bounties are needed; they're one option, and especially an option since the WMF is still not showing much interest in actually increasing development. I again would donate (under certain circumstances) to a non-profit organization that uses >95% of its donations for actual, tangible software development of concrete issues & wishes, and that could be achieved via wish bounties, so I think there may well be some potential there. Prototyperspective (talk) 16:27, 4 February 2026 (UTC)Reply
By backfiring I mean psychologically. There is a weird thing for some people where, once you put a dollar number on something, they start to value it very differently. If it's free, they center the work on more intangible values, but once a dollar value is assigned they stop thinking about the intangible values and start viewing it solely through a money lens. Admittedly this is a bit of a personal opinion – it could be wrong, or maybe it only applies to a small minority of contributors. Re contributors getting jealous of those getting money: while I agree, I think that ship has long sailed with all the WMF paid people. By minimum, I don't mean minimum viable product (which I agree is a good thing); I mean code that is at a minimum level of quality, which might be difficult to review and cause maintenance problems (or other externalities) later. Re issues with lots of subscribers: often these have hidden issues or reasons for not doing them that need to be resolved. For a bounty program to be successful, hidden issues need to be surfaced, since the bounty-completer is likely to be an outsider and won't have the unspoken context. Regardless, I would encourage people interested in this idea to keep a top-5 list of what they think the most viable & valuable tasks to turn into bounties would be. It would give us something concrete to talk about, even if nobody has ponied up any cash yet. Bawolff (talk) 22:26, 4 February 2026 (UTC)Reply
I noticed that the "hist" link on Special:Watchlist and Special:Contributions now says "history". Is there a reason this was changed? I'm not sure it's necessary and the fact it's wider than "diff" makes it look very awkward. - The Bushranger (talk) 06:05, 31 January 2026 (UTC)Reply
I found that weird, too. But my main reaction is: I'm sorry anyone put any time into changing that, and hope they don't then waste more time changing it back. - Jmabel ! talk 23:47, 31 January 2026 (UTC)Reply
Latest tech news from the Wikimedia technical community. Please tell other users about these changes. Not all changes will affect you. Translations are available.
Updates for editors
The "Page information" feature, which gives validating information about a page (example), now automatically includes a table of contents. If there is a local MediaWiki:Pageinfo-header page created by individual users, it can now be removed. [10]
View all 21 community-submitted tasks that were resolved last week. For example, VisualEditor previously added bold or italic formatting inside link descriptions, making the wikicode complex. This has now been fixed. [11]
Updates for technical contributors
There was no XML dump on 20 January. Additionally, from now on, dumps will be generated once per month only. [12]
The MediaWiki Interfaces team removed support for all transform endpoints containing a trailing slash from the MediaWiki REST API. All API users currently calling those endpoints are encouraged to transition to the non-trailing-slash versions. If you have questions or encounter any problems, please file a ticket in Phabricator to the #MW-Interfaces-Team board.
Users are reminded that the Wikimedia Foundation has shared some guiding questions for the July 2026–June 2027 Annual Plan on Meta and Diff. These focus on global trends, faster and healthier experimentation, better support for newcomers, strengthening editors and advanced users, improving collaboration across projects, and growing and retaining readership. Feedback and ideas are welcome on the talk page.
Not mentioned here, as it isn't really relevant for most projects, but I fixed a number of video pipeline problems.
The transcode pipeline had files that had crashed but for which the crash had not been registered properly. phab:T385270
Some Opus files were not properly recognized. phab:T414643
Some Ogg files had duration 0; after this week that should be fixed. phab:T414348
Additionally, a few things are currently in progress. They are either fixed or about to be fixed in getID3, which should trickle down to Commons somewhere in the next few weeks, including:
very small files would crash
midi files without tempo events had no duration phab:T414645
chunked uploads of webm sometimes crashed phab:T403213
streamed (youtube) webm does not have duration phab:T357035
And of longer-term interest in the image area:
MediaWiki will now directly render SVGs by default. This is not active for WMF, but the hope is that by moving this forward we will someday get there.
JPEG XL files can soon be recognized, which is a prerequisite for parsing and thumbnailing them at some point phab:T270855
I hope that makes people happy. And if you are interested in working on similar problems, I encourage you to do so and to tag me as reviewer of your work on Gerrit. —TheDJ (talk • contribs) 11:36, 4 February 2026 (UTC)Reply
{{int:Talkpagelinktext}} is a template for translating "talk", but when using a custom signature with templates, it automatically substitutes every template, even the translation template, thus losing the translation. Is there a workaround for this? HyperAnd [talk] 13:18, 5 February 2026 (UTC)Reply
Creative Commons Search Portal now uses MediaSearch by default
A few weeks ago, I discovered that search.creativecommons.org used a regular text search instead of the media search interface. The text search is not well suited for finding media files: it only searches for a keyword and does not show any media files. Well, the good news is that this is now fixed. I also corrected the project name there, which was "Wikipedia Commons". Nemoralis (talk) 16:55, 6 February 2026 (UTC)Reply
@Kevin Payravi: I can see several sizes of thumbnail, but on the file page itself the preview is blank. If that's just me and not a more general problem, then it's odd but not important. - Jmabel ! talk 19:55, 7 February 2026 (UTC)Reply
Since 2025, when accessing Commons without a VPN, the site has sometimes been inaccessible for a day, sometimes for several days, and occasionally clicking "Publish changes" after editing fails to go through. May I ask what is causing this? Huangdan2060 (talk) 14:16, 7 February 2026 (UTC)Reply
I filed a task. When special GeoTIFFs like "digitale Geländemodelle" (digital terrain models) are uploaded, we see a plain white preview. This is irritating and may lead to premature deletion requests. When I put such a file into QGIS, I see graphical information. Fixing this may be useful, as Commons acquires more and more geodata. --PantheraLeo1359531 😺 (talk) 20:56, 7 February 2026 (UTC)Reply
Could anyone please help obtain the full-resolution images found here? I'm able to upload the as-displayed images, but—if you click on the images—you're able to zoom in further. I also tried dezoomify (website & extension), but it did not work. These are for use in the article Serpent labret with articulated tongue and, as works by the U.S. federal government, should be in the public domain. Thanks, --Usernameunique (talk) 17:16, 8 February 2026 (UTC)Reply
Perhaps I'm missing something, but are you referring to this image version (2000x1321)? I just copied the image URL from the page you linked and removed the size parameter (&max=980) to get the image at its actual size. Thanks. Tvpuppy (talk) 23:02, 8 February 2026 (UTC)Reply
Thanks, both. Tvpuppy, I tried that too, but it's still smaller than the zoomed-in version—so unless you can zoom in beyond full resolution (i.e., you zoom in more, but it just gets fuzzier), it seems there's a higher-resolution version. --Usernameunique (talk) 01:02, 9 February 2026 (UTC)Reply
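For anyone scripting that kind of check, a small sketch of the URL trick Tvpuppy describes: dropping the size-limiting query parameter before downloading. The parameter name (max) comes from the example above and the URL shown is a placeholder; other image servers may use different parameters.
<syntaxhighlight lang="python">
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def without_size_limit(url: str, param: str = "max") -> str:
    """Return the same URL with the size-limiting query parameter removed."""
    parts = urlparse(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != param]
    return urlunparse(parts._replace(query=urlencode(query)))

print(without_size_limit("https://example.org/iiif/image.jpg?id=123&max=980"))
# -> https://example.org/iiif/image.jpg?id=123
</syntaxhighlight>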
Latest tech news from the Wikimedia technical community. Please tell other users about these changes. Not all changes will affect you. Translations are available.
Updates for editors
Logged-in contributors who manage large or complex watchlists can now organise and filter watched pages in ways that improve their workflows with the new Watchlist labels feature. By adding custom labels (for example: pages you created, pages being monitored for vandalism, or discussion pages) users can more quickly identify what needs attention, reduce cognitive load, and respond more efficiently. This improves watchlist usability, especially for highly active editors.
A new feature available on Special:Contributions shows temporary accounts that are likely operated by the same person, and so makes patrolling less time-consuming. Upon checking contributions of a temporary account, users with access to temporary account IP addresses can now see a view of contributions from the related temporary accounts. The feature looks up all the IPs associated with a given temporary account within the data retention period and shows all the contributions of all temporary accounts that have used these IPs. Learn more. [13]
When editors preview a wikitext edit, the reminder box that they are only seeing a preview (which is shown at the top), now has a grey/neutral background instead of a yellow/warning background. This makes it easier to distinguish preview notes from actual warnings (for example, edit conflicts or problematic redirect targets), which will now be shown in separate warning or error boxes. [14]
The Global Watchlist lets you view your watchlists from multiple wikis on one page. The extension continues to improve — it now properly supports more than one Wikibase site, for example both Wikidata and testwikidata. In addition, issues regarding text direction have been fixed for users who prefer Wikidata or other Wikibase sites in right-to-left (RTL) languages. [15][16]
The automatic "magic links" for ISBN, RFC, and PMID numbers have been deprecated in wikitext since 2021 due to inflexibility and difficulties with localization. Several wikis have successfully replaced RFC and PMID magic links with equivalent external links, but a template was often required to replace the functionality of the ISBN magic link. There is now a new built-in parser function{{#isbn}} available to replace the basic functionality of the ISBN magic link. This makes it easier for wikis who wish to migrate off of the deprecated magic link functionality to do so. [17]
A new global user group has been created: Local bots. It will be used internally by the software to allow community bots to bypass rate limits that are applied to abusive web scrapers. Accounts that are approved as bots on at least one Wikimedia wiki will be automatically added to this group. It will not change what user permissions the bot has. [20]
The MediaWiki Users and Developers Conference, Spring 2026 will be held March 25–27 in Salt Lake City, USA. This event is organized by and for the third-party MediaWiki community. You can propose sessions and register to attend. [21]
Looks like you get a tool intended to let you merge any content from the two different file pages and then to turn one file into a redirect to the other file. However, it doesn't look to me like it's a very clear UI (e.g. both file descriptions are editable; which one will be used once the merge takes place?)
(All of this is a normal method for completing a category that could be applied in many cases, maybe at some point more systematically, semi-automatically, or routinely.)
I would like to upload a batch of orthophotos of Thuringia. Each tile comes as a ZIP file containing a TIF, a meta file, and a TFW file. Is there a way to let the Wikimedia servers extract only the TIF to be uploaded? Downloading and re-uploading them on my PC would probably take quite some time. Thanks! --PantheraLeo1359531 😺 (talk) 16:15, 12 February 2026 (UTC)Reply
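I don't know of a server-side unzip option. In case there turns out not to be one, here is a local sketch that at least avoids extracting anything except the TIFs and uploads them straight away with pywikibot (directory names, edit summary, and the description text are placeholders):
<syntaxhighlight lang="python">
import zipfile
from pathlib import Path

import pywikibot

site = pywikibot.Site("commons", "commons")

for zip_path in Path("tiles").glob("*.zip"):
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            if not name.lower().endswith((".tif", ".tiff")):
                continue                                       # skip meta and tfw members
            local = Path(zf.extract(name, path="extracted"))   # only the TIF is written to disk
            page = pywikibot.FilePage(site, f"File:{local.name}")
            site.upload(
                page,
                source_filename=str(local),
                comment="Orthophoto tile (batch upload)",
                text="== {{int:filedesc}} ==\n{{Information}}",  # placeholder description
            )
</syntaxhighlight>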