I am collecting links from Indo-Aryan and Dravidian articles. Let me know if any changes are needed. The page is located here. Gunyam (talk) 17:14, 10 February 2023 (UTC)[reply]
Noted, thanks. That is a massive amount of work and time; I can pick away at it when time is available. If possible it would help to know the link counts so I can focus on the biggest sites first. -- GreenC 04:59, 13 February 2023 (UTC)[reply]
It makes no sense to state they were never married and then follow it up with he never married. It is basically stating the same thing twice and obviously if he never married, they never married. So why does the article need both sentences? 107.115.153.65 (talk) 01:02, 6 April 2023 (UTC)[reply]
Guess I was lazy because I didn't care about the article that much, and someone has been trying to delete it anyway, but yeah, that went beyond the pale. If I have the time or interest I will try again. -- GreenC 23:00, 30 April 2023 (UTC)[reply]
So, back in 2020 an RfC was held to decide whether the Estado Novo should be labeled "fascist" or not. It had always been labeled "fascist" on Wikipedia, but a user challenged that; no consensus was reached, and eventually it was decided to maintain the status quo. So, unless another RfC is held that ends with a different result, shouldn't the category be restored? -- 2804:248:FBF7:1900:4514:DF54:AECD:F097 (talk) 03:04, 20 May 2023 (UTC)[reply]
Maybe? Not sure I read it that way; I'll need to look at the RfC more closely. Bring it up on the talk page. Three users have reverted, so explain your position and give people a chance to respond. If they don't respond after a couple of days, link to the talk page discussion in the edit summary when adding the category back. -- GreenC 03:39, 20 May 2023 (UTC)[reply]
User:Fabrikator, thank you so much for that information; you are the first to notice it. Obviously that changes everything. I wonder how long they have been up? The outage was about 1.5 years I think. A good place to discuss this, if you're interested, is the WebCite talk page. -- GreenC 18:07, 24 June 2023 (UTC)[reply]
Can WebCite archives be archived into other archiving services? Is an effort underway to do this for all the pages which we link to WebCite archives of? -sche (talk) 21:07, 25 June 2023 (UTC)[reply]
WebCite built some anti-"theft" stuff into their system which makes that difficult but not impossible. Wayback won't work. Archive.today is the best bet, but we would need to clear it with them first to see how they would want to proceed. There are over 2 million WebCite links across all Wiki projects (non-unique), only around 35k on Enwiki. If someone can figure out a way to generate a WARC file from WebCite, it might be possible to work with Wayback to import them, because Wayback will be the most reliable long-term storage. -- GreenC 21:21, 25 June 2023 (UTC)[reply]
What do you feel is the most efficient process for cleaning up content and sources after your bot nukes a blacklisted source, such as you did for hugedomainsdotcom? Searching insource:"hugedomains.com" yields the result, "There were no results matching the query", indicating a clean removal. What remains as the content and source, or is the content unsourced?
My question applies to the 800+ articles containing healthlinedotcom which is now blacklisted. I have begun removing healthline one by one, but as you know, this is a tedious, long-term job.
Is there a possibility that your nuking bot could add a replacement [citation needed] tag? This would seem to be a common need once blacklisted sources are removed, so may need some Village Pump discussion, for which I could offer a proposal, if you think it's warranted.
Short answer: yes, it could add one, though I'm not sure how useful it will be in all conditions, and if 2+ cites exist it would add it anyway. We could make a bargain: I'll do the nuking if you do the cleanup - every edit needs to be manually checked as this work is error-prone (hugedomains was about 7%, I think). Can go slow, i.e. 50 edits, pause, etc. It's a lot less work than 100% manual. As for VP, it will have trouble with consensus, in my experience, particularly when it's a bot vs. human question and there is a high probability of bot error, as in this case. Usually better off just doing it, fixing errors as you go, and explaining to anyone who raises concerns. The RfC is pretty clear these are deprecated, i.e. should be removed. The manual check of every edit greatly reduces any concerns about automated editing. -- GreenC 01:40, 7 July 2023 (UTC)[reply]
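For illustration, a much-simplified sketch of the kind of edit being discussed, replacing a citation to a blacklisted domain with a citation-needed tag. This is hypothetical code, not the actual bot; the real job must also handle named refs, self-closing refs, templates, and the case where other citations remain.

```python
import re

def nuke_blacklisted(wikitext, domain):
    """Replace each <ref>...</ref> whose body cites the blacklisted
    domain with a {{citation needed}} tag. Simplified sketch only."""
    pattern = re.compile(
        r'<ref[^>/]*>'                              # opening tag (skips self-closing refs)
        r'(?:(?!</ref>).)*?' + re.escape(domain) +  # body containing the domain
        r'(?:(?!</ref>).)*?</ref>',                 # rest of body up to the closing tag
        re.DOTALL | re.IGNORECASE)
    return pattern.sub('{{citation needed|date=July 2023}}', wikitext)
```

As stressed above, each such edit would still need a manual check before saving.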
Thanks for a good solution. I'm ok doing 50 at a time (835 current uses). Because most remaining healthline refs are in a place where WP:MEDRS should have been used, I think the {{cn}} is benign, so please apply it - I'll check, and in many cases will have to find a MEDRS source.
There will be a section at WP:URLREQ and the edit summary will link there. I need to retool the bot and will ping you when the first couple are done. -- GreenC 03:24, 7 July 2023 (UTC)[reply]
Thanks for this. Does the bot leave a comment about what bad site it removed? 'Cos the actual bad claims need checking and likely removal too - David Gerard (talk) 11:53, 7 July 2023 (UTC)[reply]
The article will be discussed at Wikipedia:Articles for deletion/Mark Zuckerberg book club until a consensus is reached, and anyone, including you, is welcome to contribute to the discussion. The nomination will explain the policies and guidelines which are of concern. The discussion focuses on high-quality evidence and our policies and guidelines.
Users may edit the article during the discussion, including to improve the article to address concerns raised in the discussion. However, do not remove the article-for-deletion notice from the top of the article until the discussion has finished.
But when will you stop undoing my changes?? Explain one thing to me: what bothers you about my changes?? You know that I can block your changes. I add information on the buildings; it's not that I vandalize the page. But when are you going to stop with these reverts?? I can do what I want on the Wikipedia page, I can fix it or change it. If you write bullshit on Wikipedia I'll edit it, like on the Tianjin CTF Finance Center: the skyscraper is not 510 m high but 530 m, and that's it, come on!!! 93.147.210.109 (talk) 11:48, 3 August 2023 (UTC)[reply]
It's because you are a long-term vandal who occasionally makes good edits but mostly makes bad ones, then hides the bad edits behind the good ones. Here is an example Special:Diff/1167916821/1167921388, but as you can see here the HAAT is 845 not 85. You are malicious; you do stuff like this all the time. For this reason, myself and many others revert every single edit you make; we don't bother checking if it is good or bad, everything you do is immediately reverted. Now, tomorrow, next month and next year. I look forward to reverting you, and look forward to doing so for months and years into the future. If you have a problem with that, you can open an ANI request and we can draw more community attention to your case. -- GreenC 15:23, 3 August 2023 (UTC)[reply]
Greenc you too are a long term vandal who occasionally makes good edits but mostly makes bad ones, so you hide your intentions behind the good edit. Here is an example Special:Diff/1167916821/1167921388 but as you can see here HAAT is 530 metres not 1600. You are being mischievous, always doing stuff like that. Because of this, I and many others undo every single change you make, we don't bother checking if it's good or bad, everything you do is immediately undone. Now, tomorrow, next month and next year. I look forward to restoring you and look forward to it for months and years into the future. If you have a problem with this, you can open an ANI request and we can get more community attention to your case. Then go to ... 93.147.210.109 (talk) 07:56, 4 August 2023 (UTC)[reply]
Hello GreenC, I saw you reverted my edit on the article Bluewaters Island. May I ask why you reverted it? I'm quite new to Wikipedia and don't know all the rules yet. It would help me a lot if you could tell me the exact reason for the revert.
Baconbeam20 (talk) 08:37, 29 August 2023 (UTC)[reply]
Because your edit summary says "grammar" but you deleted a large block of text, including all the sources for the paragraph and related context about other wheels. It looks like vandalism. Not saying it is, only how it looks when the action and edit summary are so far apart; typical vandals try to hide their actions behind benign edit summaries, hoping no one looks more closely at what they actually did. The question is why you deleted all those sources and other information from that paragraph. -- GreenC 14:45, 29 August 2023 (UTC)[reply]
Hello, thanks for your quick reply. As I said, I'm quite new to Wikipedia and English isn't my native language, so I'm never sure what to write in the edit summary. I understand your reasons for the revert, and hope it makes me a better Wikipedian.
Do you know much about Quill and Scroll? I came across it in 'random article'. I can see they've been around for a long time but I'm not finding many sources to work off of and I know you know books and stuff Graywalls (talk) 15:45, 31 August 2023 (UTC)[reply]
Thanks for finding the archive for the URL for Dominika Lasota. I thought I tried both Wayback and Archive.today and both failed, but obviously, the article is now properly archived.
Someone (maybe you?) told me some time ago about a safer anti-bot protection when there's a need for a note that robots may try to delete, but I didn't store a convenient link to the conversation. What's the recommended alternative to prevent a robot from removing just some small piece of text, such as in this case?
How about something like <ref>{{cite ... |archive-url= |url-status=live}}{{void | this url is unarchivable}}</ref> so that the void template is external to the cite template? Or would <ref>{{cite ... |archive-url= |url-status=live}}<!-- this url is unarchivable --></ref> be preferred/more robust? Boud (talk) 22:25, 4 September 2023 (UTC)[reply]
@GoingBatty: Thanks - I've put that on my user page so I don't lose it :), since it could be useful sometimes, but it's not what I was thinking of. I'm fairly sure there's something even less intrusive - just to protect a small section of wikitext (cite or other), i.e. just one of the parameters with a comment immediately afterwards. I don't want to stop bots updating/checking links. Boud (talk) 14:32, 9 September 2023 (UTC)[reply]
Sorry, I missed seeing your comment earlier. Other than cbignore, you could set the URL to "permalive" at https://iabot.org (Manage URL data -> Manage individual URL). It will treat it as permanently alive, i.e. never assume it's dead, and thus never add an archive URL. You can also delete any existing archive URL there, in case the archive URL is not working. -- GreenC 15:42, 9 September 2023 (UTC)[reply]
Hi GreenC. I've been working on the List of Wikipedias article to add references, update the list, etc. The NUMBEROF data from GreenC bot is exceptionally helpful. Would it be possible for the bot to also incorporate the launch date for each Wikipedia edition? It would be a lifesaver. (It wouldn't necessarily need to work for the oldest Wikipedias if that information isn't available for every Wikipedia, "n/a" would be fine.) Daniel Quinlan (talk) 20:38, 10 September 2023 (UTC)[reply]
Great, glad you find it useful; also pinging User:Johnuniq since it was collaborative. I looked into that a long time ago, but didn't have an authoritative source for the dates. I recall some dates kept changing, but my memory is hazy. Basically I don't want to create static date data for the bot, then have to update it each time someone requests a change. If the list was both human-editable and machine-readable, hosted somewhere, possibly NUMBEROF could have that option. Perhaps a page on meta, or even a Wikipedia article like "List of Wikimedia project start dates". -- GreenC 21:13, 10 September 2023 (UTC)[reply]
How are launch dates known? I see them at List of Wikipedias where they have been entered manually. Do we know of a central list somewhere? They can't change that often so I would be happy to make a list suitable for Commons structured data where they could be edited (a little painfully). The bot could read that list and I could tweak NUMBEROF to use it. Johnuniq (talk) 01:53, 11 September 2023 (UTC)[reply]
Some dates seem to be based on the date of the first edit. Others are based on the date the site was activated. Many of the dates are not well-sourced. I would like to replace all of the "trust me bro" dates with a date that is cited via either a reasonably solid reference or data from commons.wikimedia.org. Being able to archive references for posterity is also important.
If there are other timestamped events like the first account creation or the first log entry of any kind, it might be worth pulling those separately. It's tempting to just use edit date across the board, but it's dubious trying to make a decision without seeing the data. Daniel Quinlan (talk) 03:29, 11 September 2023 (UTC)[reply]
It's not just source reliability. Wikipedia:Anniversary only lists dates for 52 Wikipedias. There are 335 Wikipedias. That's why I suspect that the date of first edit (or perhaps first activity) might be more practical for most Wikipedias. Daniel Quinlan (talk) 07:25, 11 September 2023 (UTC)[reply]
There's probably a Quarry query (or 335 of them) that could be run, along the lines of SELECT rev_timestamp FROM revision WHERE rev_id=1, but I'm not entirely certain that the current database schema preserves the original revision numbering. For example, here on en.wp, Special:Diff/1 is dated 26 January 2002. Folly Mox (talk) 08:28, 11 September 2023 (UTC)[reply]
Mailing list and bug tracker history are probably the most likely places for a source; nothing authoritative for all of them in a single place, I fear. —TheDJ (talk • contribs) 07:08, 11 September 2023 (UTC)[reply]
The good news is, since there is currently no authoritative source, it's a greenfield opportunity to create one, based on criteria of our choosing. Preferably criteria that are algorithmic and machine-retrievable, like data from the API. It won't always be objectively accurate, but it's at least a rationale that is consistent. There can be exceptions for manual overrides. m:Wikipedia:List of Wikipedia birthdates or something would be great. -- GreenC 15:57, 11 September 2023 (UTC)[reply]
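As a sketch of what an algorithmic, machine-retrievable criterion could look like: the MediaWiki API's list=allrevisions module can return a wiki's chronologically oldest surviving revision. This illustrates the "first edit" criterion discussed above as an assumption, not an agreed method; imported or deleted early edits can skew the result.

```python
from urllib.parse import urlencode

def first_edit_query_url(lang):
    """Build a MediaWiki API query for the oldest surviving revision
    of a given Wikipedia (list=allrevisions, sorted oldest-first)."""
    params = {
        "action": "query", "format": "json",
        "list": "allrevisions", "arvdir": "newer",
        "arvlimit": "1", "arvprop": "timestamp",
    }
    return "https://%s.wikipedia.org/w/api.php?%s" % (lang, urlencode(params))

def first_edit_timestamp(api_response):
    """Pull the timestamp out of the decoded JSON API reply."""
    return api_response["query"]["allrevisions"][0]["revisions"][0]["timestamp"]
```

A bot could loop this over all 335 language codes and write the results to a human-editable list page.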
What do you think of the idea of creating a bot to extract from Jstor links in CS1 templates the stable link, place that into |jstor=, and then purge |url=, |archive-url=, |archive-date=, and |url-status=? Something similar for URLs to WorldCat also seems prima facie reasonable. Ifly6 (talk) 18:19, 21 September 2023 (UTC)[reply]
Given some of the not-particularly-responsive responses to various, I think I see why you've been so reluctant to bring anything up. It does not seem easy to convey the actual issue – redundant and useless archive links – in a comprehensible manner: one person thinks I want to ban all of Jstor, one person thinks I want to ban all archives, and one person thinks I want to ban all URLs. Ifly6 (talk) 01:19, 26 September 2023 (UTC)[reply]
I thought your main/best/initial point concerned the availability, or not, of the check box in the IABot tool. It's tempting to see the "big picture", but it's too complicated with so many branching issues. When faced with complexity, the solution is to break it down into manageable pieces. A precise question is: should we allow the check box on enwiki, and if so, under what conditions, such as rate limiting, permissions, guidance? This question has come up repeatedly but no one has expressed it before as well as you did on the Link rot talk page. If you were to RfC it, your challenge would be to keep on-point, avoiding all this other derailing discussion. The question should be simple, black and white; avoid the temptation of larger issues and getting it all done in one big step, which is a dead end. (BTW this is also how politicians get legislation done over many generations: small pieces at a time that people can agree to when the time is right.) -- GreenC 04:09, 27 September 2023 (UTC)[reply]
Go to the History tab of the article; at the top is "Fix dead links". It will take you to another website where you Allow login. Then choose 'Run Bot' then 'Fix single page'. It will run the bot on that page. -- GreenC 20:03, 24 September 2023 (UTC)[reply]
(Semi-lurker lurking.) @Sameboat: It is not necessary to add archive links for live URLs, which your recent IA Bot edits triggered (1, 2). The check box, which has the description Add archives to all non-dead references (Optional), does not create or update archives. It merely adds the already-existing archive URLs into the article markup; if those live links became dead, the bot would automatically fill in the archive links anyway. Ifly6 (talk) 02:49, 25 September 2023 (UTC)[reply]
I noticed a lot of the single-stars are in {{webarchive}}. Tracked it back to here, which I did 7 years ago during conversion of the old template, when the date field was missing. The old {{wayback}} template didn't support most of those parameters; it's GIGO, and my bot ignored them and did the best it could. The webarchive template supports the '*'; it says in footnote #2 "archive index". I could just go through and add the earliest archive available, but there are cases where people do this intentionally, and I don't want any trouble. There are about 2,600 cases of the single star; I have ignored them as not worth the trouble, too much context sensitivity. Possibly most of the webarchive cases could be converted, since they were most likely done by my bot.
The |website=web.archive.org is caused by reFill converting a bare archive URL to a cite web. I've been fixing them for years but they just keep coming. If I had a way to locate them I could run the bot on those pages: extract the source URL and move it into the url field, move the archive URL to archive-url, add archive-date, and delete |website=web.archive.org - feel free to add this to Citation bot also.
The trailing '*' is another context-sensitive thing; it's best to work around it and leave it in place. Same with if_ and other types of flags. I've had too much trouble with people complaining when they are removed. I save these flags, then operate on a clean link without the flag, then restore the flag later. -- GreenC 04:08, 2 October 2023 (UTC)[reply]
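A sketch of the URL surgery described above, splitting a Wayback Machine URL into its archive date, any modifier flag (like if_, which gets preserved and restored), and the original source URL. This is a hypothetical helper, not WaybackMedic's actual code.

```python
import re
from datetime import datetime

# Parts of a Wayback Machine archive URL as produced by reFill citations.
WAYBACK_RE = re.compile(
    r'https?://web\.archive\.org/web/'
    r'(\d{14})'      # 14-digit snapshot timestamp
    r'([a-z_]*)'     # optional modifier flag such as "if_"
    r'/(.+)')        # the original (source) URL

def split_wayback_url(archive_url):
    """Return (archive-date, flag, source URL) or None if not a Wayback URL."""
    m = WAYBACK_RE.match(archive_url)
    if not m:
        return None
    ts, flag, source = m.groups()
    dt = datetime.strptime(ts, "%Y%m%d%H%M%S")
    archive_date = "%d %s" % (dt.day, dt.strftime("%B %Y"))  # enwiki-style date
    return archive_date, flag, source
```

The pieces map onto |archive-date=, the saved flag, and |url= respectively, with the full input becoming |archive-url=.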
I didn't know either until the discussions at Elizabeth Holmes and recent discussions at SBF. The problem is the word has two senses: one is objectively true as used in this sense, someone who was convicted of fraud. The other sense is slang and derogatory, like "the fraudster pizza delivery guy forgot the cheesy fries". People see that second sense most often in their lives, and when they see it on Wikipedia it doesn't ring right to them. I tried fighting to Keep fraudster at Holmes but was ultimately overwhelmed by those who consider it too derogatory, or at least mistaken for being derogatory. Since other solutions exist, the consensus decision was to remove it. -- GreenC 23:18, 5 November 2023 (UTC)[reply]
I didn't even think about the slang form of it (though I have long known that it exists), and the consensus is quite understandable when taking that into account. JeffSpaceman (talk) 23:31, 5 November 2023 (UTC)[reply]
I understood the rationale for expansion of short to full URLs for archive.is, but I noted it may create a mismatch in the archive date parameter (see example Miss Grand Singapore 2023). While this might be a one-off, is it possible during the expansion for the bot to check the archive date of the archive URL and update the archivedate parameter at the same time? Appreciate the bot and your work. Thanks! JASWE (talk) 07:49, 21 November 2023 (UTC)[reply]
No, because the process is running on hundreds of wikis and it's not aware of the endless variety of templates and date formats in use, which is needed to modify the dates. The date mismatches are a separate problem that requires a different kind of bot. WaybackMedic is capable of and often does fix them on enwiki, but it's not designed to run full-auto unattended; I would need to make a new bot for that. BTW my bot didn't "create" the date mismatch, the date mismatch already existed; my bot made it more visible by expanding the URL shortening (one reason we don't use URL shortening: it hides problems). -- GreenC 00:26, 22 November 2023 (UTC)[reply]
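For a sense of what such a date-checking bot might do, here is a minimal sketch comparing the 14-digit timestamp embedded in an expanded archive URL against an |archive-date= value. It assumes a single enwiki date format, which is exactly the simplification that doesn't hold across wikis, as explained above.

```python
import re
from datetime import datetime

def archive_dates_agree(archive_url, archive_date):
    """Check whether the snapshot timestamp embedded in an expanded
    archive URL matches a cite template's |archive-date= value.
    Simplified: assumes the "15 July 2023" date format only."""
    m = re.search(r'/(\d{14})/', archive_url)
    if not m:
        return False
    url_date = datetime.strptime(m.group(1), "%Y%m%d%H%M%S").date()
    param_date = datetime.strptime(archive_date, "%d %B %Y").date()
    return url_date == param_date
```

A real bot would need per-wiki knowledge of templates and date formats before it could safely rewrite the parameter.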
Noted on the technicalities and huge variety of the cite templates, and hence the limitations of the bot. Yes, the bot did not "create" the mismatch; I should have said "expose" the mismatch. Thanks for the explanation! JASWE (talk) 08:51, 23 November 2023 (UTC)[reply]
I regularly make minor edits like this, mostly prompted by your bot's good work on User:Certes/Backlinks/Report. The errors are caused by faithfully copying imperfect citations from Trove. For example, please follow the first citation in that diff: https://trove.nla.gov.au/newspaper/article/243444331 and click "Cite" (second icon in the left column, below "A"). A left sidebar should appear; scroll down to "Wikipedia citation". It contains a link to The Herald, a dab, but our article on that source is The Herald (Melbourne). Anything in that very specific format, {{cite news}} with |newspaper=[[The Herald]], |location=Victoria, Australia and |via=National Library of Australia, could be fixed automatically with minimal risk. Dozens of other Trove sources have similar mistakes. Some of the bad links are disambiguation pages which gnomes will detect and fix, but a bot could be more accurate and save them a lot of trouble. Others are articles on a different topic which are harder to spot and may go unfixed, e.g. Arrow for The Arrow (newspaper). I've made a draft list of the latter in User:Certes/Trove/fix. I can easily create a similar list for the dabs (it's basically those entries in User:Certes/Trove where column 2 is a dab). However, I may need to remove a couple of ambiguous cases where there were multiple newspapers in the same "location" (AU state) with similar titles (or, for extra credit, we might distinguish them by date). I'm not sure it's quite ready for BOTREQ yet, but does this look like a suitable job for a bot? Certes (talk) 21:25, 22 November 2023 (UTC)[reply]
I should add that I have an unused Toolforge account and most of the skills to write the bot myself, but would probably need a mentor. I know from bitter experience of training others that holding my hand might take an experienced bot-herder longer than just writing the thing themselves, but it could be a useful investment if you're short of people to write other bots in the future. I think the algorithm should be pretty simple, and efficient as we need only consider pages which link to the offending page and have changed recently. Certes (talk) 21:34, 22 November 2023 (UTC)[reply]
Hello! Voting in the 2023 Arbitration Committee elections is now open until 23:59 (UTC) on Monday, 11 December 2023. All eligible users are allowed to vote. Users with alternate accounts may only vote once.
The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.
Hi User:Certes and User:GoingBatty: Toolforge is discontinuing GridEngine very soon, so I am moving the reporting tool to my home server. I know that sounds sketchy, but it should be more reliable than Toolforge has been; I run other infrastructure stuff there. I don't want to copy the data cache over, so it will generate the cache from scratch. This will require two runs, since it requires two copies of the backlinks data to compare for differences. You may see some odd results. Any problems, let me know. Thanks. -- GreenC 02:45, 2 December 2023 (UTC)[reply]
User:Certes, if you want any suggestions for automating the copying of files from a local computer up to Toolforge, I finally figured out how to do this. I run the tool process locally and copy the output to the ~/toolname/static directory where it's visible on the live web (HTML files, data files), via an rsync command that mirrors local directories -> Toolforge. -- GreenC 16:58, 3 December 2023 (UTC)[reply]
Thanks; that's a good idea. I've got my Toolforge login enabled (though not running any tools yet), so that might give access to rsync too. As for the backlinks, yesterday's was suspiciously short and today's has a second entry for Julius Caesar at the bottom (both have valid links), but otherwise everything appears normal. Certes (talk) 17:13, 3 December 2023 (UTC)[reply]
Alright, say you have a local directory /home/user/rosebud which is your application, and you want to mirror the contents of this directory on Toolforge in /data/project/rosebud for the tool named rosebud. The local rsync command is:
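A plausible reconstruction of the command (the exact invocation was lost; the Toolforge login host and the archive/verbose flags are assumptions based on the flags explained below):

```shell
# Mirror the local application directory to the tool's Toolforge directory.
# Trailing slashes make rsync sync directory *contents* rather than nesting.
rsync -av --delete --progress /home/user/rosebud/ rosebud@login.toolforge.org:/data/project/rosebud/
```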
The --delete means delete any files in the Toolforge directory that are not also in the local directory, so the directories stay in sync. The --progress is optional; it shows what files it copies/deletes. I really like this method for some tools: I can continue to post HTML pages on the web via Toolforge, but with the freedom to run the application locally.
Hopefully backlinks settles down after a few runs. Maybe the greater distance over the network is causing some API requests to time out, and thus data is missing? Not sure about Julius Caesar; if that keeps happening let me know. -- GreenC 17:43, 3 December 2023 (UTC)[reply]
Hey, you remember me, it's Harley Quinn on duty.
I'm the one who requested the Bad Guy Patrol bot.
I liked the ideas you were spitballing and I would like to team up with you.
What do you say?
I hope you're online 'cause I'll leave in an hour.
I believe that this individual is noteworthy, with achievements in media appearances and the prolific writing of 18 books that are very popular. Wikipedia has pages on kids who have social media notoriety.
I believe this page is not being allowed due to religious and spiritual prejudice. Even the words used by the Wikipedia editor who deleted it, such as calling Dolores Cannon a "quack", demonstrate a negative and emotional view of the subject.
I am going to take this as far as I must to get this page generated. Can you help me with this please? Holy4d (talk) 15:28, 7 December 2023 (UTC)[reply]
Hello. Thanks for the warning. I won't use complex links again. But there is something I didn't understand: is it impossible for me to fix those long complex links? I mean, is there no other way than to paste the link of the PDF download page directly into the source, rather than the PDF article itself? Is this true? Vartolu3566 (talk) 04:42, 9 December 2023 (UTC)[reply]
User:Vartolu3566: Did you read WP:AWSURL? It explains what to do. For example, enter the following search term into google.com:
"Muş-Bulanık'ta Demir Çağ Merkezleri" site:academia.edu
Note the "" around the title of the work, and the site:academia.edu right after it.
The first Google result is the work in question located at academia.edu - copy and paste that URL into the |url= field. Any questions, let me know. -- GreenC 04:46, 9 December 2023 (UTC)[reply]
Hello. For example, can I set up a blog site to verify information about the geography of a lake and add it as a source to Wikipedia? And what if I confirm the plants growing in the area with clear photos? Is this against Wikipedia rules? Vartolu3566 (talk) 08:26, 13 December 2023 (UTC)[reply]
Generally not; see WP:UGC (user-generated content) and WP:BLOGS for the rules about blogs. I understand what you're saying; original research like you are suggesting is a great thing, but Wikipedia is unfortunately a more limited site. The subject area (geography, flora, fauna) is not my expertise, so I can't think of another place to post this information. You could ask the same question at Wikipedia:Teahouse and also ask, "If not at Wikipedia or a blog, where would be a good place?" -- GreenC 15:32, 13 December 2023 (UTC)[reply]
Hi, I removed all references to Price being American, per your concerns.
Also please note that per MOS:INFOBOXPURPOSE an infobox should "summarize (and not supplant) key facts that appear in the article." It is not redundant to have nationality in both the body and the infobox. Rift (talk) 21:12, 15 December 2023 (UTC)[reply]
BOZ (talk) is wishing you a Merry Christmas! This greeting (and season) promotes WikiLove and hopefully this note has made your day a little better. Spread the WikiLove by wishing another user a Merry Christmas, whether it be someone you have had disagreements with in the past, a good friend, or just some random person. Don't eat yellow snow!
Spread the holiday cheer by adding {{subst:User:Flaming/MC2008}} to their talk page with a friendly message.
I'm wishing you a Merry Christmas, because that is what I celebrate. Feel free to take a "Happy Holidays" or "Season's Greetings" if you prefer. :) BOZ (talk) 00:17, 23 December 2023 (UTC)[reply]
Hello GreenC, may you be surrounded by peace, success and happiness on this seasonal occasion. Spread the WikiLove by wishing another user a Merry Christmas and a Happy New Year, whether it be someone you have had disagreements with in the past, a good friend, or just some random person. Sending you heartfelt and warm greetings for Christmas and New Year 2024. Happy editing, GoingBatty (talk) 19:53, 24 December 2023 (UTC)[reply]
New year, new scripts. Welcome to the 23rd issue of the Wikipedia Scripts++ Newsletter, covering around 39% of our favorite new and updated user scripts since 24 December 2021. That’s right, we haven’t published in two years! Can you believe it? Did you miss us?
Got anything good? Tell us about your new, improved, old, or messed-up script here!
User:Alexander Davronov/HistoryHelper has now become stable with some bugfixes and features such as automatically highlighting potentially uncivil edit summaries and automatically pinging all the users selected.
To a lesser extent, the same goes for User:PrimeHunter/Search sort.js. I wish someone would integrate the sorts into the sort menu instead of adding 11 portlet links.
Aaron Liu: Watchlyst Greybar Unsin is a rewrite of Ais's Watchlist Notifier with modern APIs and several new features such as not displaying watchlist items marked as seen (hence the name), not bolding diffs of unseen watchlist elements which doesn’t work properly anyways, displaying the rendered edit summary, proper display of log and creation actions and more links.
Alexis Jazz: Factotum is a spiritual successor to reply-link with a host of extra features like section adding, link rewriting, regular expressions and more.
User:Aveaoz/AutoMobileRedirect: This script will automatically redirect MobileFrontend (en.m.wikipedia) to normal Wikipedia. Unlike existing scripts, this one will actually check if your browser is mobile or not through its secret agent string, so you can stay logged in on mobile! Hooray screen estate!
Deputy is a first-of-its-kind copyright cleanup toolkit. It overrides the interface for Wikipedia:Contributor copyright investigations for easy case processing. It also includes the functionality of the following (also new) scripts:
User:Elominius/gadget/diff arrow keys allows navigation between diffs with the arrow keys. It also has a version that requires holding Ctrl with the arrow key.
Frequently link to Wikipedia on your websites yet find generating CC-BY credits to be such a hassle? Say no more! User:Luke10.27/attribute will automatically do it for ya and copy the credit to yer clipboard.
User:MPGuy2824/MoveToDraft, a spiritual successor (i.e. fork) to Evad37's script, with a few bugs solved, and a host of extra features like check-boxes for choosing draftification reasons, multi-contributor notification, and appropriate warnings based on last edit time.
/CopyCodeBlock: one of the most important operations for any scripter and script-user is to copy and paste. This script adds a copy button in the top right of every code block (not to be confused with <code>) that will, well, copy it to your clipboard!
m:User:NguoiDungKhongDinhDanh/AceForLuaDebugConsole.js adds the Ace editor (a.k.a. the editor you see when editing JS, CSS and Lua on Wikimedia wikis) to the Lua debug console. "In my opinion, whoever designed it to be a plain <textarea> needs to seriously reconsider their decision."
GANReviewTool quickly and easily closes good article nominations.
ReviewStatus displays whether or not a mainspace page is marked as reviewed.
SpeciesHelper tries to add the correct speciesbox, category, taxonbar, and stub template to species articles.
User:Opencooper/svgReplace and Tol's fork replaces all rasterized SVGs with their original SVG codes for your loading pleasures. Tell us which one is better!
ArticleInfo displays page information at the top of the page, directly below the title.
/HeaderIcons takes away the Vector 2022 user dropdown and replaces it with all of the icons within, top level, right next to the Watchlist. One less click away! There's also an alternate version that uses text links instead of icons.