Beyond Binary Wikia

This is my wiki. (see Special:ListUsers)

|Create your own>

Top Categories:

Astrology, Neurodivergent People, USA, History, Physics, Autism Spectrum, Politics, Culture, Philosophy, Psychology, Celebrities, Aspects, Imperialism, Fiction, Quantum, Historical Astrology, Geography, Linguistics, Oppression, Black Culture, Gender, Modern History, Astronomy, Music, Colonialism, Capitalism, Numbers, Information Age, Transgender, Revolution, Biology, Celebrity Culture, Zodiac, Years, Neurodiversity, Religion, Countries, Mathematics, Christianity, Spirituality, Neurology, Cryptocurrency, Quantum Philosophy, Asteroids, War, UK, LGBT, Natal Astrology, Mental Health


"A template is a special type of page that is made so its content can be included in other pages. Since a given template can be included in many pages, it can help reduce duplication and promote a uniform style between pages." - |>


"Templates are useful for:
  • Creating content that should appear on many pages.
  • Formatting content or data (such as infoboxes) in a way that should be consistent across many pages.
  • Creating a shortcut to a frequently-visited page or for writing things that you repeat often when communicating with others.
  • Replacing long, complicated code so that a page is easier for other users to edit.
  • Protecting parts of a page from editing while leaving other sections open for edits."

| pre-existing templates>

"New Fandom communities come with pre-loaded default templates. You can view a complete list of templates available on your community by going to Special:AllPages and selecting "Template" from the namespace dropdown menu. Click "Go" and all available templates will appear in a list."

| to create a basic template>

"Templates can be very powerful, but also sometimes very complicated. It often helps to start by creating the simplest possible kind of template, and then experimenting from there.
  • On your community, navigate to "Template:Example" using the address bar of your browser and click "Create", which can be found in the top right corner of the content section. This will open the source editor to create the template.
  • You should see a popup asking you to choose a template type; check one of the options. If none match what you're looking for, check "Unknown".
  • Type a couple of words or a sentence in the editor.
  • Click "Publish". You have just created a template with some sample content."
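A minimal usage sketch ("Example" being the sample template created in the steps above): once published, the template can be transcluded on any page by wrapping its name in double braces.

   {{Example}}

When the page is saved, the template's sample content is rendered in place of the braces.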

| parameters>

"Templates can have parameters. These allow you to alter the way the template is displayed, such as including specific text or altering the design.

To take Template:Wikipedia as an example, adding just {{Wikipedia}} (which renders a link of the form [[wikipedia:en:{{{1}}}]])

assumes that the page on Wikipedia has the same name as the current page the template is used on. However, a parameter can be added to tell the template that the page on Wikipedia is, say, Microsoft. In source edit mode, the code to add this parameter is {{Wikipedia|Microsoft}}, though in VisualEditor the same is achieved by clicking on a template and editing the parameters."

| templates>

"A different way to use a template is to substitute its content into a page. This can only be done in source editor, and is done by inserting subst: immediately after the opening braces: {{subst:templatename}}. Once the page is saved, the link to the template is removed and the template output is substituted in its place and can be further edited. Any updates to the template will not affect the content that was substituted into the page."

e.g. - makes timelines easy

| can I find a complete list of all templates on my wiki, not just unused or uncategorized ones?>


Lua Templates

(see Lua#Lua for Mediawiki)


"Lua is a dramatically different coding experience than basic wikitext templates, and resembles a more 'traditional' programming syntax (Lua being based on C) and thus offers two key advantages. First and foremost is that logical functionality – if, else, and while statements along with arrays and variable definition for instance - is built in to the Lua language, making the implementation of basic logic much easier in Lua than the hacky way MediaWiki adds it through extra add-ons and parser functions. Secondly, because Lua is streamlined for logical operations, it is much more technically efficient."

| & Support>

"Lua is enabled by default on all FANDOM wikis. The general standard Lua libraries along with the specialized Scribunto libraries and InfoboxBuilder are available.
In addition to the documentation linked below, we have a forum board set up here on DEV Wiki to ask questions and get help."

| templating/Basics>

"Lua is implemented in MediaWiki wikis using the Scribunto/Lua extension and stored in resource pages using the Module: namespace."
"To create your first Lua script:
  1. Navigate to Module:Sandbox/Username, where Username is your Fandom username.
  2. It's a sandbox. Everyone is free to play in their sandbox.
  3. Clear all existing code. Add the following code and save the page:"
local p = {}
function p.hello()
    return 'Hello!'
end
return p

| templating/Basics#Understand your first Lua script>

"# local p = {} creates a local table or array for your code and names it p.
  1. function p.hello() adds a function named hello to the table. Functions can be invoked (called) by name from outside the module.
  2. return 'Hello!' returns the string Hello! when the function is invoked (called).
  3. end ends the function.
  4. return p returns the code table to whatever process loads this Lua module.

The code that runs the script includes:

  1. invoke: invokes (calls) a Lua module, and loads something
  2. Sandbox specifies the name of the module to be loaded.
  3. hello specifies the name of the function inside the module that is to be invoked (called)."
{{#invoke:Sandbox|hello}} Keyword 1st Parameter 2nd Parameter





What it does

specifies action - here load module and implement function

specifies the name of the module to be loaded

specifies the name of the function inside the module that is to be invoked (called).

| module repository>

"Lua modules can also be loaded from the Open Source Library using require("Dev:ModuleName"), as opposed to require("Module:ModuleName"). These "global modules" are available for re-use FANDOM-wide and are described in more detail here."
"You could also check out the list of Lua Modules."

Optional Arguments

Possible? (in recipe wikibooks?, history)

with parser functions?
Sample A {{#if: {{{1}}} | Parameter 1 is not defined, or is defined and non-null/non-empty. | Parameter 1 is null. It contains only empty string(s) or breaking space(s) etc.}}
{{#ifeq:{{{v|}}}|{{{v|-}}}| v was defined (and may be empty) | v was not defined }}
Let "v" = 1, 2, 3 or 4 (for the 1st, 2nd, 3rd or 4th argument); make it optional by performing a conditional comparison on it with an altered version of itself. If the argument is not supplied, then the comparison allows the template to ignore the protocol for that argument and behave as an 'n-1' argument function. (see Template:YT for an example with 3 required arguments and an optional 4th for the year of publication of a YouTube video being linked)
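A minimal sketch of the idea, using a hypothetical optional 4th parameter for a year (in the style of the Template:YT example):

   {{#if: {{{4|}}} | ({{{4}}}) | }}

{{{4|}}} supplies an empty default, so the #if test is empty (false) when no 4th argument is passed, and something like {{YT|a|b|c|2016}} renders "(2016)".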



| is wikipedia pageid? how to change it into real page url?>

"The pageid is the MediaWiki's internal article ID. You can use the action API's info property to get the full URL from pageid:
any way to go the reverse? getting pageid/title from any given url? – Vikas Prasad Nov 29 '17 at 8:46
@VikasPrasad: You can use the titles parameter instead: – Matěj G
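A sketch of both directions using the action API's info property. The endpoint URL is an assumption for illustration; any MediaWiki api.php works the same way. This only builds the request URLs (fetching and parsing the JSON response is left out):

```python
from urllib.parse import urlencode

# Assumed endpoint; substitute your own wiki's api.php.
API = "https://en.wikipedia.org/w/api.php"

def url_from_pageid(pageid):
    """Build a query asking prop=info (with inprop=url) for a page's full URL."""
    params = {
        "action": "query",
        "prop": "info",
        "inprop": "url",      # adds "fullurl" to the response
        "pageids": pageid,
        "format": "json",
    }
    return API + "?" + urlencode(params)

def pageid_from_title(title):
    """The reverse direction: use the titles parameter to look up the pageid."""
    params = {
        "action": "query",
        "prop": "info",
        "titles": title,
        "format": "json",
    }
    return API + "?" + urlencode(params)

print(url_from_pageid(10501))
print(pageid_from_title("Main Page"))
```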


Magic Words

  • "The special page Special:Redirect can be used to access pages via their page IDs. For example, Special:Redirect/page/10501, redirects to the present page.
  • index.php accepts the parameter curid to access pages via their page IDs. For example, /w/index.php?curid=10501 will load the present page" (from discussion: "PAGEID was added as a magic word in MediaWiki 1.20. Wikia uses a hacked version of MediaWiki 1.19. At Wikia, the only way I know to get page ID is from the API." e.g.

wikia = v1.19 port

"This replacement happened because {{{1}}} tells the wiki to pass the first parameter of the template here. This can be extended with {{{2}}}, {{{3}}}, etc."


Here are some of the magic words used most commonly at FANDOM:

  • __NOTOC__ hides the table of contents on a page.
  • __TOC__ places the table of contents exactly where this is entered. It overrides the NOTOC switch.
  • __NOWYSIWYG__ disables the classic editor on a page.
  • __HIDDENCAT__ makes a category hidden.
  • {{CURRENTDAYNAME}} produces the current day of the week (rendered here as "Monday").
  • {{NUMBEROFARTICLES}} shows the number of articles on your community (rendered here as "2,308").
  • {{SITENAME}} produces the name of the community (rendered here as "Beyond Binary Wikia").
  • {{PAGENAME}} produces the name of the page the word is placed on (rendered here as "Wiki").
  • {{FULLPAGENAME}} produces the entire name, i.e. with the namespace prefix, of the page it is placed on (rendered here as "Wiki").

Page Numbering - How to get the 50,000th article created by time order via API?

Bawolff (talkcontribs)
"Not really (or at least not easily)
Best way is to probably use sql.
For all namespaces, do something like:
   SELECT page_namespace, page_title from page order by page_id limit 49999,1;"

"Uniquely identifying primary key. This value is preserved across edits and renames.
Page IDs do not change when pages are moved, but they do change when pages are deleted and then restored. As of MediaWiki 1.27, this value is still 'preserved' across deletions via an analogous field in the archive table (introduced in MediaWiki 1.11).
MediaWiki offers a number of relevant tools:
  • The page ID of any page (except special pages) can be looked up in the "page information" link from the Tools menu.
  • The magic word {{PAGEID}} can be used to return the page id of a page (rendered here as 3389).
  • The special page Special:Redirect can be used to access pages via their page IDs. For example, Special:Redirect/page/10501, redirects to the present page.
  • index.php accepts the parameter curid to access pages via their page IDs. For example, /w/index.php?curid=10501 will load the present page."

MediaWiki API

"The WikiPage class represents a MediaWiki page and its history. It encapsulates access to page information stored in the database, and allows access to properties such as page text (in Wikitext format), flags, etc.
In theory, WikiPage is supposed to be the model object for pages, while the Article class handles the business and presentation functions. If you have an Article object, you can get a WikiPage object using Article::getPage()."

"Robots or bots are automatic processes that interact with Wikipedia (and other Wikimedia projects) as though they were human editors. This page attempts to explain how to carry out the development of a bot for use on Wikimedia projects and much of this is transferable to other wikis based on MediaWiki. The explanation is geared mainly towards those who have some prior programming experience, but are unsure of how to apply this knowledge to creating a Wikipedia bot."

"The below Python code snippet makes use of Wikimedia API for hierarchically crawling a given Wikipedia category. The output of code stores the links and text content of wiki documents in separate directories."

Multi-Category Search


   "Is there a way to design a search feature (template) that can search for pages that are in 2+ categories?
   Through google I've seen a few things that allude to it involving mediawiki
   ...but was unsure on how it can be set up with a wikia site." -Yami Michael 12:38, April 11, 2012 (UTC)


   "This can be done with a special page called "Category Intersection" - you can try it out here -
   and you can contact FANDOM Staff through the contact form on Special:Contact
   if you want it enabled on your wiki -- RandomTime 13:21, April 11, 2012 (UTC)"

   "There is an unsupported/unimplemented API for this - you can try here
   (it takes only two categories, and is not entirely intuitive):"


"We currently support the following APIs:

  • Content API - Provides access to articles, search and media files.
  • MediaWiki API - Since FANDOM wikis are built on top of MediaWiki, the MediaWiki API can be used to access most of the data on our websites.
  • LyricWiki API - Provides access to LyricWiki."

Files -

Links =

Repeat Citations = Help:MediaWiki | Community Central | FANDOM powered by Wikia

"Originally created for Wikipedia, MediaWiki is an open source PHP-based wiki engine now ... FANDOM communities are running on MediaWiki version 1.19.24."


"EditSimilar is an extension that suggests similar articles that need attention upon save. It was originally written by Bartek Łapiński and Łukasz Garczewski for Wikia.
It finds and suggests another article that may need attention or improvement when a user saves an edit. It is designed to help users decide what to do next, and boost the number of edits on a wiki."


Fandom's source code:
  • Configurable Discord bot for linking wiki articles from any Wikia-based community
  • A collection of JavaScript snippets for use on Wikia, mainly with the Gadgets extension

Dynamic Page Lists

(see also Category:DPL)

       category = Characters

"category=1st category name|2nd category name|3rd category name|...
category=1st category name&2nd category name&3rd category name&...
You can either use the pipe symbol for logical OR or you can use the & symbol for logical AND. Mixing both is not possible! If you specify more than one category line their arguments will be implicitly connected with AND. Thus you can build a logical expression where you have several AND terms with each term consisting of an OR group of categories.
Attention: the category command uses the pipe symbol to delimit its arguments (logical OR). When using DPL with { {parser function} } syntax you MUST escape the pipe character by a template (which is typically called "!") or you must use the broken pipe symbol ("¦"):"

categorymatch Select articles based on categories. You can specify one or more patterns (SQL LIKE); a page will be selected if at least one of its categories matches at least one of the patterns.
categorymatch=1st category pattern|..
A "%" is used to denote "any number of any characters".
Example 1:
This list will output pages that belong to categories like Africa, Africans, Europe, Europeans etc."
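The example code itself was lost in copying; per the pattern description, Example 1 was presumably along these lines ("%" matching any run of characters):

   <DPL>
   categorymatch = Africa%|Europe%
   </DPL>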

"notcategory=category name"

"categoriesminmax=[min],[max]" (number of categories)

"notnamespace=namespace name"

   {{#dpl:title=My Page|include=#First Chapter}}
"will include the contents of "My Chapter" of an article named "My Page" in the main namespace."

"This will output all pages (regardless of namespace) which have a name that contains "foo" somewhere in the title or start with "bar""
" to make it case-insensitive, use the parameter ignorecase."
"This will output all pages (regardless of namespace) which do not contain an "e" or a "u" in their title."


"titleregexp Select pages with a title matching the specified regular expressions. The pattern will be used as a REGEXP argument in a SQL query. Namespaces are ignored as the namespace= parameter can be used to further narrow the selection.
titleregexp=regular expression
   OR {{#dpl:titleregexp=[0-9]+.*y$}}
This will output all pages (regardless of namespace) which have a digit in their name and end with a "y". Use the parameter ignorecase to make the comparison case-insensitive."

nottitlematch Select pages with a title NOT matching any of the specified patterns. The patterns are used as a LIKE argument in a SQL query. Namespaces are ignored as the namespace= parameter can be used to further narrow the selection. Normally you would want to use this selection only in combination with other criteria. Otherwise output could become huge.


This will output all pages (regardless of namespace) which do not contain an "e" or a "u" in their title.

| namespace          =
| category           = Events
| count              = {%DPL_count:50%}
| resultsheader      = ²{Extension DPL continue¦dir=%SCROLLDIR%¦pages=%PAGES%¦total=%TOTALPAGES%¦firsttitle=%FIRSTTITLE%¦lasttitle=%LASTTITLE%¦page={{FULLPAGENAME}}}²\n
| resultsfooter      = ²{Extension DPL continue¦dir=%SCROLLDIR%¦pages=%PAGES%¦total=%TOTALPAGES%¦firsttitle=%FIRSTTITLE%¦lasttitle=%LASTTITLE%¦page={{FULLPAGENAME}}}² %DPLTIME%\n
| scroll             = yes
| columns            = 3

[DPL 1]


This list will output three random articles from the group of the 20 largest articles on Africa.
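The DPL code for that list did not survive the copy; a plausible reconstruction from the description (order by size, keep the 20 largest, pick 3 at random):

   <DPL>
   category     = Africa
   ordermethod  = size
   count        = 20
   randomcount  = 3
   </DPL>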

Inline DPL Input

  • {{#dpl: category = Policy | namespace= {{NAMESPACE}} }}
    "This list will output pages that are in the namespace the current page is in - whatever it is - and belong to [[Category:Policy]]."
  • {{#dpl:category=Africa¦Europe|category=Politics and conflicts}}
    "This list will output pages that have [[Category:Africa]] OR [[Category:Europe]], AND [[Category:Politics and conflicts]] listed."

Inline DPL Output

  • inlinetext
  • "To define the inline text used in mode=inline"

<code><DPL> category=Africa mode=inline inlinetext=   •   </DPL></code>

"This list will output pages that have [[Category:Africa]] shown like Item1 • Item2 • Item3 • ..."

Random Inline Output

   {{#dpl:category=Pluto in Leo|mode=inline|randomcount=3}}


Map Editor

Special:Map Editor


jQuery.makeCollapsible (mw-collapsible) fails on Mobile


Admin Operations

Database Download (Special:Statistics)

Special Pages







Special:MostRevisions (most edits, most edited)




Externally Hosted Images

Help:Externally hosted images

Deleting Edit History - you can't, unless you delete the page and then re-create one.

Wiki Automation (Bots)



| to rename categories>

"Categories cannot be renamed, but there is a way to bypass the manual changing of all pages. Using AutoWikiBrowser, one can make a list of all pages in the category to be removed and then have the software change it to the new one in a semi–automated manner."

| Talk:AutoWikiBrowser>

"I'm new to talk:AutoWikiBrowser (AWB), I'm attempting to do a simple change from 'Wakefield Trinty Wildcats|Wakefield Trinity' to 'Wakefield Trinity' for articles in the Category:Wakefield Trinity Wildcats players. Within AWB I've entered the category, and pressed 'Make List', I've checked the 'Find and replace' box, and then entered the text in the 'Find' and 'Replace within either and/or both 'Normal settings, and the 'Advanced Settings', and then pressed 'Start', AWB then brings up the article, but it doesn't appear to 'automatically make any changes and then go to the diff(erence)'. However, I can manually make changes in the 'Edit box' and save the changes. How I get AWB to make the changes automatically?"

∣YouTube:/2014(Lingo's AWB videos)/autowikibrowser example task>

∣YouTube:/2016(ReVon Balzac)/How-to AutoWikiBrowser (AWB)>

Javascript Wiki Browser


"Javascript Wiki Browser (abbr. JWB) is a script that allows users to make semi-automated edits more easily. For general use, it works similarly to the downloadable AutoWikiBrowser, but it requires no executable installation, and can run on any (major) operating system."

Bots

"Bots can automate tasks and perform them much faster than humans. If you have a simple task that you need to perform lots of times (an example might be to add a template to all pages in a category with 1000 pages), then this is a task better suited to a bot than a human."

"There are a number of semi-bots available to anyone. Most of these take the form of enhanced web browsers with MediaWiki-specific functionality. The most popular of these is AutoWikiBrowser (AWB), a browser specifically designed to assist with editing on Wikipedia and other Wikimedia projects. A complete list of Wikipedia semi-bots can be found at w:Wikipedia:Tools/Editing tools. Semi-bots, such as AWB, can often be operated with little or no understanding of programming.

If you decide you need a bot of your own due to the frequency or novelty of your requirements, you don't need to write one from scratch. Many bots publish their source code, which can sometimes be reused with little additional development time. There are also a number of standard bot frameworks available for download. These frameworks comprise the vast majority of a bot's code. Since these bot frameworks are in common usage and the complex coding has been done by others and has been heavily tested, it is far easier to get bots based on these frameworks approved for use. The most popular and common of these frameworks is Pywikibot (PWB), a bot framework written in Python, which is well documented and tested and for which, in addition to the framework, many standardized scripts (bot instructions) are available. Other examples of bot frameworks can be found below."

Macros for Web Browser

"Open source record and playback test automation for the web" - Kantu Selenium IDE (Open-Source)

"Kantu is a modern Selenium IDE plus tons of additional features, open-source"
  • File Access XModule - read and write directly to your hard drive (Kantu now has two kinds of storage modes for macros and data)
  • - download xmodule package

"Modern Open-Source Web Macro Recorder and Selenium IDE. Use it for general web automation, web testing, form filling & web scraping.
Kantu is the most popular open-source web macro recorder. If there’s an activity you have to do repeatedly, just create a web macro for it. The next time you need to do it, the entire macro will run at the click of a button and do the work for you."

WikEd - doesn't work without Monobook - Monobook is gone now.


"A good way of testing your bot as you are developing is to have it show the changes (if any) it would have made to a page, rather than actually editing the live wiki. Some bot frameworks (such as pywikibot) have pre-coded methods for showing diffs."

  • Set a custom User-Agent header for your bot (per the Wikimedia User-Agent policy, if your bot will be operating on Wikimedia wikis).
  • Use the maxlag parameter with a maximum lag of 5 seconds. This will enable the bot to run quickly when server load is low, and throttle the bot when server load is high.
    • If writing a bot in a framework that does not support maxlag, limit the total requests (read and write requests together) to no more than 10/minute.
  • Use the API whenever possible, and set the query limits to the largest values that the server permits, to minimize the total number of requests that must be made.
  • Edit (write) requests are more expensive in server time than read requests. Be edit-light and design your code to keep edits to a minimum.
    • Try to consolidate edits. One single large edit is better than 10 smaller ones.
  • Enable HTTP persistent connections and compression in your HTTP client library, if possible.
  • Do not make multi-threaded requests. Wait for one server request to complete before beginning another.
  • Back off upon receiving errors from the server. Errors such as squid timeouts are often an indication of heavy server load. Use a sequence of increasingly longer delays between repeated requests.
  • Make use of the Assert Edit extension, an extension explicitly designed for bots to check certain conditions, which is enabled on Wikipedia.
  • Test your code thoroughly before making large automated runs. Individually examine all edits on trial runs to verify they are perfect.
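The "back off upon receiving errors" advice above can be sketched as an increasing-delay retry loop (names and delays are illustrative, not from any particular bot framework):

```python
import time

def call_with_backoff(request, max_tries=5, base_delay=5.0):
    """Retry a failing server call, doubling the delay after each error."""
    for attempt in range(max_tries):
        try:
            return request()
        except Exception:                      # e.g. a squid timeout
            if attempt == max_tries - 1:
                raise                          # give up after the last try
            time.sleep(base_delay * 2 ** attempt)

# Usage sketch: call_with_backoff(lambda: some_api_request(params))
```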

"If your bot is doing anything that requires judgment or evaluation of context (e.g., correcting spelling) then you should consider making your bot manually-assisted, which means that a human verifies all edits before they are saved. This significantly reduces the bot's speed, but it also significantly reduces errors."

"Python is a popular interpreted language with object-oriented features.

Getting started with Python
  • Pywikibot – The most used Python bot framework.
  • wikitools—A lightweight bot framework that uses the MediaWiki API exclusively for getting data and editing, used and maintained by [[2]] (downloads)
  • mwclient—An API-based framework maintained by Bryan
  • mwparserfromhell - A Python parser for MediaWiki text, maintained by The Earwig"

Functions and Scripts


| Pywikibot.Page>

"extlinks(total=None, step=NotImplemented)
Iterate all external URLs (not interwiki links) from this page.
  • Parameters: total – iterate no more than this number of pages in total
  • Returns: a generator that yields unicode objects containing URLs.
  • Return type: generator"
  • Get the first revision of the page.
  • DEPRECATED: Use Page.oldest_revision.
  • Return type: tuple(username, Timestamp)"
  • Return the first revision of this page.
  • Return type: Revision"
  • Return pageid of the page.
  • Returns: pageid or 0 if page does not exist
  • Return type: int"


"class, timestamp, user, anon=False, comment="", text=None, minor=False, rollbacktoken=None, parentid=None, contentmodel=None, sha1=None)
  • revid (int) – Revision id number
  • text (str, or None if text not yet retrieved) – Revision wikitext.
  • timestamp (pywikibot.Timestamp) – Revision time stamp
  • user (str) – user who edited this revision"

| Pywikibot.Pagegenerators>

"A page generator is an object that is iterable (see ) and that yields page objects on which other scripts can then work. cannot be run as script. For testing purposes can be used instead, to print page titles to standard output."
"-links Work on all pages that are linked from a certain page. Argument can also be given as “-links:linkingpagetitle”.
-liverecentchanges Work on pages from the live recent changes feed. If used as -liverecentchanges:x, work on x recent changes.
-imagesused Work on all images that are contained on a certain page. Can also be given as “-imagesused:linkingpagetitle”.
-newimages Work on the most recent new images. If given as -newimages:x, will work on x newest images.
-newpages Work on the most recent new pages. If given as -newpages:x, will work on x newest pages."
"-start Specifies that the robot should go alphabetically through all pages on the home wiki, starting at the named page. Argument can also be given as “-start:pagetitle”.
You can also include a namespace. For example, “-start:Template:!” will make the bot work on all pages in the template namespace.
default value is start:!"
"-page Work on a single page. Argument can also be given as “-page:pagetitle”, and supplied multiple times for multiple pages.
-pageid Work on a single pageid. Argument can also be given as “-pageid:pageid1,pageid2,.” or “-pageid:’pageid1|pageid2|..’” and supplied multiple times for multiple pages."
"-catfilter Filter the page generator to only yield pages in the specified category. See -cat generator for argument format.
"-namespaces Filter the page generator to only yield pages in the
-namespace specified namespaces. Separate multiple namespace
-ns numbers or names with commas.
-ns:0,2,4 -ns:Help,MediaWiki
You may use a preceding “not” to exclude the namespace.
-ns:not:2,3 -ns:not:Help,File
If used with -newpages/-random/-randomredirect/linter generators, -namespace/ns must be provided before -newpages/-random/-randomredirect/linter. If used with -recentchanges generator, efficiency is improved if -namespace is provided before -recentchanges.
If used with -start generator, -namespace/ns shall contain only one value."
  • pywikibot.pagegenerators.AncientPagesPageGenerator(total=100, site=None, number='[deprecated name of total]', repeat=NotImplemented)
Ancient page generator.
total (int) – Maximum number of pages to retrieve in total
site (pywikibot.site.BaseSite) – Site for generator results.
  • "pywikibot.pagegenerators.DayPageGenerator(start_month=1, end_month=12, site=None, year=2000, endMonth='[deprecated name of end_month]', startMonth='[deprecated name of start_month]')
Day page generator.
site (pywikibot.site.BaseSite) – Site for generator results.
year (int) – considering leap year."






|> (Python download)


Ruby Mediawiki Api

| Gamepedia/Mediawiki Butt Ruby>

"A Ruby library for the MediaWiki API."
"Pretty much every API action in core MediaWiki is possible through a helper instance method. However, for things that are not supported, there is the post method, which submits a POST request (which works fine for things that require a GET request in the API), that takes a hash parameter to pass to the API. Through this helper method, any API can be accessed!"
"To get the text of the main page:
require 'mediawiki/butt'
wiki = MediaWiki::Butt.new('')
wiki.login(username, password)
main_page_text = wiki.get_text('Main Page')

| Page Instance Method>

So, wiki.create_page(title, text, opts = {}) would then create a page on the object 'wiki'?

| Ruby Api>

"A library for interacting with MediaWiki API from Ruby. Uses adapter-agnostic Faraday gem to talk to the API."
"Usage: Assuming you have MediaWiki installed via MediaWiki-Vagrant.
require "mediawiki_api"

client = MediaWikiApi::Client.new "http://localhost:8080/w/api.php"
client.log_in "username", "password" # default Vagrant username and password are "Admin", "vagrant"
client.create_account "username", "password" # will not work on wikis that require CAPTCHA, like Wikipedia
client.create_page "title", "content"
client.get_wikitext "title"
client.protect_page "title", "reason", "protections" #  protections are optional, default is "edit=sysop|move=sysop"
client.delete_page "title", "reason"
client.upload_image "filename", "path", "comment", "ignorewarnings"
client.watch_page "title"
client.unwatch_page "title"
client.meta :siteinfo, siprop: "extensions"
client.prop :info, titles: "Some page"
client.query titles: ["Some page", "Some other page"]



| The Beast Wiki:MediaWiki-Butt>

"MediaWiki-Butt or MediaWiki::Butt is a Ruby interface to the MediaWiki API written by Eli Foster (SatanicSanta) and Eric Schneider (Xbony2) of the Official Feed The Beast Wiki. This interface utilizes the HTTPClient Ruby gem written by Hiroshi Nakamura to quickly access the MediaWiki web API. It provides methods to get query information, edit content, perform administrative tasks, and authenticate the user."

Ruby Slack Bot

| a Slack Bot To Interact With Your Wiki/>

"We need an API key from Slack so we can interact with the bot in chat. Visit the bot service page while you are logged in your slack team. The first step is to set a username for our bot:"
"After clicking the “Add bot integration” button, the next step is to add information regarding our bot. We can set a name, upload an image, and set some other properties. Also, the needed API key is on this page:"
"You will need to have the Heroku tools installed:"

Google Apps Script

"Google Apps Script is a cloud based scripting language for light-weight application development in the Google Apps platform. Without any type of installation, this scripting language helps one to create BOT Example:

  • NeechalBOT source code Wikipedia Bot using Google apps script"

1807 Scripts

If you have a twin who is born even 40 minutes after you, they were born almost 1500 km away from you from the perspective of the Moon (or any other distant object). It is only in your local, rotating frame of reference that the birthing parent assumes both children are born in the exact same location.

Humans have learned to associate visual impressions with a physical reality, and then presume that a repeat of the same visual impression corresponds to a matching physical reality. However, any moment you experience corresponds to a unique moment in time at which the point in space occupied by your body was exactly that: the point in the universe your body occupied at that moment.

The circumference of the Earth is pi * diameter; with a diameter near the equator of 12,700 km, that is 3.1415 * 12,700 ≈ 39,900 km. Because the Earth advances about one degree along its orbit per day (1 degree/360 ~ 1 day/365), waking up in the same spot relative to a distant object two days in a row would mean travelling roughly 1/365 of that circumference, about 100 km a day near the equator (≈ 1 degree). To hold that spot over the course of a single day you would need to travel about 3,000 km every 2 hours (≈ 30 degrees). So in a full day we all travel about 399 hundred-kilometre units' worth (approximate, because of rounding).

Exporting Wiki

(see also Category:2010#Migrating this Wiki)

| All The Files Of a Wiki>

"Exporting all the files of a wiki can be done in a few different ways. If you have FTP access to the wiki, then you can move the files by following the procedure at Manual:Moving a wiki. If you lack such access, as can happen for instance if a wiki is abandoned by its site owner, then you will probably need to use workarounds. This procedure can semi-automate the task of downloading all the files, but you will still have to figure out a way to upload them to your wiki."

| To Download All Image Files In a Wikimedia Commons Page Or Directory>


" is a Pywikibot script that copies images from one wiki to another wiki. To transfer image to Wikimedia Commons use"


"RubyWikiDownloader (RWD) is a Ruby script that allows you to download images from one wiki and, if needed, upload them to your/another wiki."

|> - Tutorial for downloading an archive of an entire wiki



"Downloading all of the images from one wiki is possible using WikiTeam's Wiki Archiving scripts.
It runs on Phython scripting and you can not only download all the images but generate a wiki dump with it. However, to upload the files to the other wiki, you would have to manually upload them. I recommend using wikia's multiuploader to do so."

Mediawiki input filler (Migration Toolkit)

|> Filter stream extension to parse a MediaWiki XML package, developed by Thomas Mortagne

"The Filter take in input a XML file as defined in
The XML file contain the wiki pages but MediaWiki also store files on the filesystem. You can directly use the folder in which MediaWiki store its files.
Import a MediaWiki export in an XWiki instance
Install Filter Streams Converter Application
Install MediaWiki XML
After you have installed the two extensions, click on the Filter Stream Converter entry from the Applications panel


Choose the "XWiki instance output stream (xwiki+instance)" output type
Choose the "MediaWiki XML input stream (mediawiki+xml)" input type
Indicate in the "Source" field the XML file you exported
Indicate in the "Files" field the folder containing the MediaWiki files
After you have completed these steps, click the "Convert" button. After that, you will see the conversion progress."

Migration toolkit by Martin W. Kirst ("The recommended way to import MediaWiki content is now through the MediaWiki input filter.")

Importing from XML

| Format>

| XML Dumps>

"The Special:Export page of any MediaWiki site, including any Wikimedia site and Wikipedia, creates an XML file (content dump). "
"Special:Import can be used by wiki users with import permission (by default this is users in the sysop group) to import a small number of pages (about 100 should be safe). Trying to import large dumps this way may result in timeouts or connection failures."


"I accidently imported the wrong XML file using Special:Import. Is there a way to undo this? If not, how do I delete all pages in a category? I really don't get any of this Javascript stuff..."
|BlackZetsu>: "AjaxBatchDelete"

| An Abandoned Mediawiki Site>

|u:/tgr> "You can use the API to export all the text content, with something like action=query&generator=allpages&export. Files you'll have to scrape via some script, such as pywikibot. You can see what extensions are installed via Special:Version if you want to set up an identical wiki; some of the configuration settings are available via the siteinfo API, most you'll have to guess. There is no way to bulk clone user accounts, but you can use the MediaWikiAuth extension to transfer them when they log in."



"SharePoint is a web-based collaborative platform that integrates with Microsoft Office. Launched in 2001,[3] SharePoint is primarily sold as a document management and storage system, but the product is highly configurable and usage varies substantially among organizations.
Microsoft states that SharePoint has 190 million users across 200,000 customer organizations."


"A SharePoint farm is a logical grouping of SharePoint servers that share common resources.[26] A farm typically operates stand-alone, but can also subscribe to functions from another farm, or provide functions to another farm. Each farm has its own central configuration database, which is managed through either a PowerShell interface, or a Central Administration website (which relies partly on PowerShell's infrastructure)."

|, Standards And Integration>

"SharePoint uses Microsoft's OpenXML document standard for integration with Microsoft Office. Document metadata is also stored using this format."

| From Mediawiki To Sharepoint Wiki> Is there a way within SharePoint to migrate MediaWiki pages to SharePoint wiki pages?

"If you can afford to buy a custom toolm go for MetaLogix migration tool . This has the capability to migrate MediaWiki contents
| Manager For SharePoint>
Otherwise, you can create your own tool. You can export the contents of MediaWiki pages into static HTML using the following
Then you can read the HTML and programmaitcally create SharePoint Wiki Pages on the fly using SharePoint 2010 object model. This is technically feasible"

|U:/Cornelius J. van Dyk>

using (SPSite site = new SPSite("http://sharepoint"))
{
     SPWeb rootWeb = site.RootWeb;
     SPList wiki = rootWeb.Lists["MyWiki"];
     SPFolder rootFolder = wiki.RootFolder;
     // Create a new page in the wiki library from the built-in wiki page template
     SPFile wikiPage = rootFolder.Files.Add(String.Format("{0}/{1}", rootFolder, "My Wiki Page.aspx"), SPTemplateFileType.WikiPage);
     SPListItem wikiItem = wikiPage.Item;
     wikiItem[SPBuiltInFieldId.WikiField] = "My Wiki Page with wiki link";
     // Persist the wiki content without adding an extra version
     wikiItem.UpdateOverwriteVersion();
}

Please refer to this great article by fellow MVP Waldek Mastykarz for the detail:

| Creating Wiki Pages/>

SharePoint migration: Wiki - ShareGate: "You will often hear me talking about SharePoint Wikis. I am a big fan of collaboration by publishing and wikis. What does that mean? Well instead of creating ..."

| Sharepoint Desktop App> - using Sharepoint on a PC, allowing access from file explorer (rather than browser)

Producing HTML or XML Dumps


| Translation>

"The Content Translation tool allows editors to create translations right next to the original article and automates the boring steps: copying text across browser tabs, looking for corresponding links and categories, etc. By providing a more fluent experience, translators can spend their time creating high-quality content that reads naturally in their language."

World Wide Web (WWW)

| Wide Web>

"The World Wide Web (WWW), commonly known as the Web, is an information space where documents and other web resources are identified by Uniform Resource Locators (URLs, such as, which may be interlinked by hypertext, and are accessible over the Internet. The resources of the WWW may be accessed by users by a software application called a web browser.

English scientist Tim Berners-Lee invented the World Wide Web in 1989. He wrote the first web browser in 1990 while employed at CERN near Geneva, Switzerland. The browser was released outside CERN in 1991, first to other research institutions starting in January 1991 and then to the general public in August 1991. The World Wide Web has been central to the development of the Information Age and is the primary tool billions of people use to interact on the Internet.

Web resources may be any type of downloaded media, but web pages are hypertext media that have been formatted in Hypertext Markup Language (HTML). Such formatting allows for embedded hyperlinks that contain URLs and permit users to navigate to other web resources. In addition to text, web pages may contain images, video, audio, and software components that are rendered in the user's web browser as coherent pages of multimedia content.
Multiple web resources with a common theme, a common domain name, or both, make up a website. Websites are stored in computers that are running a program called a web server that responds to requests made over the Internet from web browsers running on a user's computer. Website content can be largely provided by a publisher, or interactively where users contribute content or the content depends upon the users or their actions."
"In short, yes and no. Lua is Unicode-agnostic and Lua strings are counted (they carry an explicit length rather than being NUL-terminated), so whenever you can treat Unicode strings as simple byte sequences, you are done. Whenever that does not suffice, there are extension modules supplying your needs. You just have to figure out what exactly you mean by "support Unicode" and use the proper abstraction from the right module. Unicode is extremely complex.
Some of the issues are:
  • Can I store, retrieve and concatenate Unicode strings?
  • Can my Lua programs be written in Unicode?
  • Can I compare Unicode strings for equality?
  • Sorting strings.
  • Pattern matching.
  • Can I determine the length of a Unicode string (byte, codeunit, codepoint, grapheme or printing using the only proper font?)?

  • Support for bracket matching, bidirectional printing, arbitrary composition of characters, and other issues that arise in high-quality typesetting."
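The length and equality questions in the list above are easy to demonstrate. Lua's situation is language-specific, but the underlying Unicode distinctions are universal, so here is a Python sketch of byte vs codepoint counts and of why comparing strings needs normalisation:

```python
import unicodedata

decomposed = "cafe\u0301"      # 'e' followed by a combining acute accent
precomposed = "caf\u00e9"      # the single precomposed character 'é'

print(len(decomposed))                    # 5 codepoints
print(len(decomposed.encode("utf-8")))    # 6 bytes (the combining mark takes 2)
print(len(precomposed))                   # 4 codepoints
print(decomposed == precomposed)          # False: same text, different codepoints

# Normalisation (NFC here) makes the two spellings compare equal:
print(unicodedata.normalize("NFC", decomposed) == precomposed)   # True
```

Both strings render identically as "café", yet naive equality and every notion of "length" disagree, which is exactly the ambiguity the list above is pointing at.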

| Scripts>

"The maintenance scripts are used to perform various administrative, import, maintenance, reporting and upgrade tasks. The scripts are written in PHP and live in the maintenance subdirectory of MediaWiki installs.
There are dozens of scripts with varying degrees of general utility and quality. You should carefully read the documentation on a script before using it; if a script isn't documented, take additional care running it."


Personal JavaScript

"Personal customization is handled via a few, specific user pages, as well as one account preference.

CSS To customize how the site looks for you using CSS, create and edit Special:MyPage/global.css on Community Central. This will apply the changes wherever you go on FANDOM.

If you want to apply personal CSS on just one community, visit Special:MyPage/common.css on that community.

JavaScript You must manually enable personal JS in your preferences before you'll be able to see the effects of changing your personal JS. Please understand the implications of turning the feature on by reading all the notes below. The option can be found on the Under the Hood tab, under "Advanced display options".

To customize how the site looks for you using JS, create and edit Special:MyPage/global.js on Community Central. This will apply the changes wherever you go on FANDOM.

If you want to apply personal JS on just one community, visit Special:MyPage/common.js on that community.

Notes Before enabling personal JS on your account for the first time, please double-check any existing personal JS you have and make sure you are happy with it. JS errors can break basic functionality - be careful! (But there may be an easy way out!) Please avoid including JS that you do not understand, and don't import from sources that you do not fully trust or that are not secure. FANDOM cannot be held responsible for any issues that occur as a result of the use of personal JS. It is your responsibility to maintain your personal JS (and CSS). Note that "personal JS" pages are currently considered to be: global.js, common.js, wikia.js, chat.js, and uncyclopedia.js."

Site-Wide JavaScript