Template talk:Enemydescription1: Difference between revisions

From The Cataclysm: Dark Days Ahead Wiki

Revision as of 10:08, 20 December 2014

What's all this about

I'm testing the implementation of the (semi)automated procedures I talked about here.

For this to work, I'd need temporary access to the server, or someone who already has access and is willing to deal with some PHP (which would be preferred, as my PHP skills are more than rusty).

PHP would be needed for several reasons:

  • The call in this test is done each time the enemy test page is loaded, constantly pulling from the GitHub server, which is slow. Ideally, a PHP script would regularly (daily, hourly, etc., using cron, or triggered manually) copy the GitHub JSONs to this server. The External Data extension would then read the data from the local copies, speeding things up a bit.
  • Apart from that, the External Data extension, which is the backbone of all this, can't deal with multidimensional arrays in JSONs (as can be seen when trying to retrieve the special attacks). While it can deal with flat arrays, it still can't retrieve particular elements of an array (not that I'm aware of), and neither can the wiki itself. A PHP parser would be needed to split multi- and unidimensional arrays, so the External Data extension could read from the parsed version of the related JSON (enabling things like explaining what each flag means).
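The parsing step described in the second point could be sketched like this. It's shown in Python rather than PHP just to illustrate the logic, and the sample entry only loosely mimics the game's monster JSONs:

```python
import json

def flatten(value, prefix=""):
    """Recursively flatten nested lists/dicts into 'dotted.key' -> scalar
    pairs, so a flat-file reader (like the External Data extension) can
    address individual elements of what was a multidimensional array."""
    flat = {}
    if isinstance(value, dict):
        items = value.items()
    elif isinstance(value, list):
        items = ((str(i), v) for i, v in enumerate(value))
    else:
        return {prefix: value}
    for key, v in items:
        new_key = f"{prefix}.{key}" if prefix else str(key)
        flat.update(flatten(v, new_key))
    return flat

# Sample shaped roughly like a CDDA monster entry, with the kind of
# nested array the extension currently chokes on.
monster = {
    "name": "turret",
    "flags": ["SEES", "ELECTRONIC"],
    "special_attacks": [["SMG", 1]],
}
print(json.dumps(flatten(monster), indent=2))
# e.g. "special_attacks.0.0" -> "SMG", "flags.1" -> "ELECTRONIC"
```

The parsed file would then contain only flat keys, which both the extension and the wiki's own parser functions can address element by element.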

Not sure if I'll go on with this, but if fully implemented it'd allow the whole wiki (with some exceptions) to be up to date with almost no human intervention.

--Kenoxite (talk) 03:00, 19 December 2014 (PST)

I have server access, and if there's a set pile of PHP, I can get it up and hopefully running. Which of us is worse at PHP is an open debate, though. KA101 (talk) 20:47, 19 December 2014 (PST)
Good to hear. I'll set up a local server and create the PHP scripts myself, then. I will pass them to you once I'm done, so we can proceed with further testing on this server. I might eventually contact Sheco, although I don't think that'd be needed.
BTW, there's also an alternative way of proceeding with the parsing, which has some advantages... and drawbacks. A new branch in the main GitHub repository (or a clone under a new account) could be created with already-parsed versions of the JSONs (parsing would be done manually, via Notepad++ regex or similar). We'd then back up those pre-parsed files to the wiki's server. That would only require a single PHP script that backs those files up regularly, without needing an extra one for parsing (or extra code in the back-up one). It also means that control of the parsing would be in the hands of any wiki admin, not only those with some PHP knowledge, and would be free for anyone to edit and update as needed. Drawbacks: the experimentals would have to be parsed regularly (and manually), which would lead to desynchs between the game and the wiki, and it'd be extra work that I doubt anyone would welcome. Just an idea, anyway. --Kenoxite (talk) 01:58, 20 December 2014 (PST)
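The single back-up script that alternative needs could be as small as this sketch. The branch name, file list, and paths are all invented for the example, and it's Python rather than PHP for brevity:

```python
import urllib.request
from pathlib import Path

# Hypothetical pre-parsed branch and file list; the real repository
# layout would differ.
RAW_BASE = ("https://raw.githubusercontent.com/CleverRaven/"
            "Cataclysm-DDA/wiki-parsed/data/json/")
FILES = ["monsters.json", "turrets.json"]

def raw_url(name):
    """Raw-download URL for one pre-parsed JSON on the branch."""
    return RAW_BASE + name

def backup(dest_dir="wiki_data"):
    """Mirror the pre-parsed JSONs locally; run from cron or by hand."""
    Path(dest_dir).mkdir(exist_ok=True)
    for name in FILES:
        with urllib.request.urlopen(raw_url(name)) as resp:
            Path(dest_dir, name).write_bytes(resp.read())
```

Since all the parsing already happened on the branch, this script has no logic of its own to maintain, which is the whole appeal of the idea.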

Feedback

Interesting, very interesting.

It has quite a few issues at the moment, but part of that is bad or outdated code in the wiki and part comes from the way the JSONs were set up. All of that can be fixed or worked around. The irradiated wanderer shows the biggest problem not already mentioned, in my opinion: there are four different versions of that monster, each using the same name, and the wiki in its current state can only handle the first. I had the same problem when splitting the Coyotes into two pages.
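The duplicate-name problem can be seen in a few lines. The entries below are made up, but mirror how several monster definitions share one display name:

```python
from collections import defaultdict

# Made-up variants standing in for the four irradiated wanderers.
entries = [
    {"name": "irradiated wanderer", "hp": 40},
    {"name": "irradiated wanderer", "hp": 55},
    {"name": "irradiated wanderer", "hp": 70},
    {"name": "irradiated wanderer", "hp": 90},
]

def by_name(monsters):
    """Group definitions by display name, so a template could ask for
    the Nth variant instead of always getting the first match."""
    groups = defaultdict(list)
    for m in monsters:
        groups[m["name"]].append(m)
    return dict(groups)

variants = by_name(entries)["irradiated wanderer"]
print(len(variants))  # 4 variants; the wiki currently only sees the first
```

Any fix along these lines would need the templates to pass an extra variant index, which is why the current name-only lookup can't handle it.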

The only other two issues that I can identify (I don't know what happened with the Shocker brute) are the zombie death function the wiki currently uses and the general opaqueness of this system. Both have easy solutions: the clothing drops can be handled by adding a simple yes/no qualifier to the JSONs for wiki purposes, and a page can be created to explain how the system works for new people if required.
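That yes/no qualifier could work roughly like this. The `wiki_drops` field name is purely hypothetical; nothing like it exists in the game JSONs:

```python
# Hypothetical flag telling the wiki whether to list clothing drops.
monsters = [
    {"name": "zombie", "wiki_drops": True},
    {"name": "turret", "wiki_drops": False},
    {"name": "shocker brute"},  # no flag -> treated as no drops
]

def droppers(entries):
    """Names of monsters whose clothing drops the wiki should list."""
    return [m["name"] for m in entries if m.get("wiki_drops")]

print(droppers(monsters))  # ['zombie']
```

A missing flag defaults to "no", so only monsters explicitly marked for the wiki would get a drops section.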

Overall though, this is the first step in getting a properly up-to-date wiki. Ideally this could one day allow us to keep archived versions from all major releases plus an up-to-date experimental version, kind of like what the DF wiki has. The enemy pages are some of the most bot-unfriendly of all the pages you could have chosen, in my opinion, so I'm pretty impressed you managed to get this far.

How far away is this from being usable for something easier, though, like the items? They seem a lot easier to do this way, in my opinion, and it's something that could really benefit from quick updating. From what I've seen and heard, though, the code is pretty bad in that section, so it may take a while to set everything up.

I'll keep doing the enemies manually since this isn't workable yet. Better to have something than nothing.

Thanks for all the great work.

JayJayForce (talk) 05:37, 19 December 2014 (PST)

Updates

I've added support for both experimental and stable versions. If an experimental version of the page doesn't exist, the link to it lets anyone create it with already preloaded code. The only input needed right now is the name of the monster.

MediaWiki allows passing variables via GET to the preload template (see here), but only in versions 1.23+ (the current wiki version is 1.19x). That would allow using a single preload template instead of the current two (one for experimental and one for stable). Not a big worry, anyway.
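For reference, the 1.23+ mechanism boils down to building an edit link like the one below. The page and template titles are invented for the example:

```python
from urllib.parse import urlencode

# Hypothetical titles; preloadparams[] only works on MediaWiki 1.23+.
params = {
    "action": "edit",
    "title": "Turret/Experimental",
    "preload": "Template:EnemyPreload",
    "preloadparams[]": "turret",
}
url = "index.php?" + urlencode(params)
print(url)
```

The `preloadparams[]` values fill the `$1`, `$2`... placeholders in the preload template, which is what would let one template serve both the stable and experimental variants.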

You can also see here that most existing pages wouldn't need any further tweaking to work with this new system (although it'd be desirable, to lower server load). The linked page is a simple copy/paste of the current turret page. The only change was the call to the template (enemydescription1 to enemydescription). That template would be updated to the new one, so that change wouldn't be needed in the final implementation. As with the rest of the tests, ignore the faulty or conflicting parameters; those are caused by calls to non-updated templates and the current problem with array parsing.

--Kenoxite (talk) 02:08, 20 December 2014 (PST)