User talk:Flargen

Past discussions are archived here: 1 2

Template:Item

Since you can access this page, can you add Size? Also, could you add a way to move the alsocombat, since the pulled taffies have it in a different spot (specifically the red one)? And potion duration shouldn't say "1 Adventures". — Cool12309 (talk) 00:08, 11 April 2013 (UTC)

  • I'm looking into these. I've been away from the game for a long time, and from the wiki for the most part, aside from a few spam deletions/bans. The 1 Adventures thing should now be fixed. --Flargen (talk) 00:10, 11 April 2013 (UTC)

File:vio_buckle.gif

Delete all revisions except mine. It has been animated since I uploaded it; what you're seeing is just cache. — Cool12309 (talk) 02:34, 4 May 2013 (UTC)

  • Funny thing is, you can't "just" delete the current revision. Trying to do that deletes all of them. Allegedly. I remember now that image reverting has always been weird. Perhaps I should just revert on myself again. --Flargen (talk) 02:39, 4 May 2013 (UTC)
    • Apparently even completely deleting it and uploading the animated version still results in the non-animated version showing. The hell? I give up. Let's find Quietust; he probably knows how to do this right. --Flargen (talk) 16:21, 4 May 2013 (UTC)
      • It's cache. Reload with Ctrl + R or Ctrl + F5. — Cool12309 (talk) 17:04, 4 May 2013 (UTC)

Template:Location/meta2

Needs updating to include "terrain" and "terrain2" (in the same spot as "terrain" is on this template). — Cool12309 (talk) 15:41, 5 May 2013 (UTC)

  • Done. Never even knew that template existed... --Flargen (talk) 18:09, 10 May 2013 (UTC)

Item Drop Rates

How do you calculate them (the data tables you post on the talk pages)? And, for that matter, the error? (Also, what exactly IS the error?) — Cool12309 (talk) 23:05, 24 May 2013 (UTC)

  • Read this. Each particular item drop bonus just gives me a binomial random variable with success probability Ip, where p is the desired base probability and I is the item drop multiplier (I=1 for +0% items, I=4 for +300% items, etc.). As Ip approaches 1 (from below), the variance (which is approximated from the observed value of Ip) on the associated measurement of p goes down to 0 (indeed, as long as Ip<1, a higher I is always better, as it yields a lower variance and thus faster convergence; the first sketch at the end of this thread illustrates this). I weight each observation at a given item drop bonus by the inverse of its variance, and combine them. I used to just weight them by the number of observations, but this put undue emphasis on less precise data (generally from lower item drop bonuses), and required some mucking about with spreadsheets when item drops were maxed out at certain values, or were not valid (no shirts will drop when a character doesn't have torso awaregness). Some of my older drop data is still listed with this old weighting; zones I have previously worked in, and have been posting new data to, use the inverse variance weighting. I then effectively use a normal approximation (which is highly accurate with this many observations) to determine when there is a unique integer rate in a 95% CI (1.96 standard errors to either side). It is possible to invoke fewer approximations by using some Bayesian analysis, but I find that to be a pain in the ass. A spreadsheet can do it my way with zero problems and little set-up, and the magnitude of the errors involved in the approximations is tiny; Bayesian analysis is substantially more involved. Starwed once wrote a web tool for it, but it has since disappeared from the web, and he has disappeared from the game. It was really convenient for spading multi-drops (like the three white pixels on a blooper, say), as those are a pain in the ass no matter what method you use. --Flargen (talk) 23:41, 24 May 2013 (UTC)
    • This incredibly confused me. Big words. Lots of them. Can you give some examples of arriving at a drop rate, and the error? — Cool12309 (talk) 00:32, 25 May 2013 (UTC)
      • You run +230% items, and in 200 encounters you see 191 of a particular item drop. That's an observed rate of 191/200 = .955, so your observation for the value of Ip is .955. Dividing by I, which in this case is 3.3, you estimate p = .28939 (just under 29%). The standard error in your observation of the value Ip is equal to sqrt(Ip(1-Ip)/200) = .01466. Since you estimated p by dividing the observation of Ip by the (known) constant I, the error in measuring p from your data is the error we just computed divided by I: .01466/3.3 = .00444 (a bit under .5%). We take our estimate of p, and add and subtract from it 1.96 times this error. Adding, we get .2981; subtracting, we get .2807. In the interval (.2807, .2981), equivalently (28.07%, 29.81%), there is a unique integer percentage: 29%, i.e. p = .29. We thus declare the value of p to be .29, with (at least) 95% confidence (meaning there is at most a 5% chance that the true value is outside of our interval, given our data). The second sketch at the end of this thread reproduces this arithmetic. --Flargen (talk) 01:18, 25 May 2013 (UTC)
        • Ah, I get it now. So if we had multiple different item values, we would just average all of them after following this? And would the error be averaged too? — Cool12309 (talk) 02:06, 25 May 2013 (UTC)
          • We would use the inverse of the square of the standard errors to weight them. A small standard error should/will correspond to more precise data, and so should be given an importance corresponding to that. Ever had a class graded something like: 25% Exam 1, 25% Exam 2, 20% Homework, 30% Final Exam? Those percentages would be weights of .25, .25, .2, and .3, respectively. In that particular case the weights sum to 1, but that's not required, and it is not what happens in what I described for drop rate spading (you could always rescale them to sum to 1 without changing anything; whether you do so is a matter of easy comprehension, as with a grading breakdown, and ease of implementation). For the example given above, the weight would be (1/.00444)^2 = 50726.4. Some data gathered at a lower drop rate bonus might have an error of, say, .0132, which would give a weight of (1/.0132)^2 = 5739.2. The first set of data has a smaller error, so it is more precise, and is weighted more heavily (around 9 times as much, seeing as its error is about a third of the other's). Under the old way I did things, I would have weighted by number of observations. If I did 1000 turns to get the .0132-error observation, then it would have dominated my estimate (it had 1000/1200 = 5/6 of the pie), despite the fact that the higher error should give us less trust that it is accurate and reliable. The third sketch at the end of this thread implements this weighting. --Flargen (talk) 02:47, 25 May 2013 (UTC)
          • The error isn't exactly averaged, but perhaps we can stick with what we have here. You can find details in the link I originally provided. Terms like "sample variance" would be equivalent to saying "the square of the standard error". --Flargen (talk) 02:52, 25 May 2013 (UTC)
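
A few short scripts can make the reasoning in this thread concrete. These are Python sketches of the method described above, not Flargen's actual spreadsheet; any names and values not quoted in the thread are made up for illustration. The first one checks the claim that, for a fixed base rate p and sample size n, a higher item drop multiplier I (with Ip < 1) always yields a smaller standard error on p:

```python
import math

# Standard error of p when spading at multiplier I with n encounters:
# SE(p) = sqrt(Ip(1 - Ip)/n) / I. The base rate and sample size below
# are illustrative, not taken from any real spading run.
p, n = 0.29, 200
for bonus in (0, 100, 230, 244):          # +0%, +100%, +230%, +244% items
    I = 1 + bonus / 100
    if I * p < 1:                         # the model only applies below the cap
        se = math.sqrt(I * p * (1 - I * p) / n) / I
        print(f"+{bonus}% items (I={I}): SE(p) = {se:.5f}")
```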
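
The second sketch reproduces the worked example above (191 drops in 200 encounters at +230% items), including the search for a unique integer percentage inside the 95% confidence interval. The helper name is hypothetical:

```python
import math

def estimate_base_rate(drops, encounters, multiplier, z=1.96):
    """Estimate a base drop rate p from data taken at one item drop bonus.

    Drops are modeled as binomial with success probability Ip, where I is
    the item drop multiplier (I = 3.3 for +230% items).
    """
    ip_hat = drops / encounters                             # observed Ip
    p_hat = ip_hat / multiplier                             # estimate of p
    se_ip = math.sqrt(ip_hat * (1 - ip_hat) / encounters)   # SE of Ip
    se_p = se_ip / multiplier                               # SE of p
    lo, hi = p_hat - z * se_p, p_hat + z * se_p             # 95% CI
    # Spading succeeds when exactly one integer percentage fits inside.
    candidates = [k for k in range(101) if lo < k / 100 < hi]
    return p_hat, se_p, (lo, hi), candidates

p_hat, se_p, ci, candidates = estimate_base_rate(191, 200, 3.3)
print(f"p = {p_hat:.5f}, SE = {se_p:.5f}, 95% CI = ({ci[0]:.4f}, {ci[1]:.4f})")
print(f"integer percentages in the interval: {candidates}")  # [29]
```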
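
The third sketch combines measurements taken at different bonuses by inverse-variance weighting. The combined standard error, sqrt(1 / sum of weights), is the standard formula for this weighting and matches the remark that the error is not simply averaged. The second point estimate, .2850, is a made-up number for illustration; the two standard errors are the ones quoted above:

```python
import math

def combine(estimates):
    """Combine (p_hat, se) pairs, weighting each by 1/se^2."""
    weights = [1 / se ** 2 for _, se in estimates]
    p = sum(w * p_hat for (p_hat, _), w in zip(estimates, weights)) / sum(weights)
    se = math.sqrt(1 / sum(weights))   # standard error of the combined estimate
    return p, se

# Weights work out to roughly 50726 and 5739, as in the thread above.
data = [(0.28939, 0.00444), (0.2850, 0.0132)]
p, se = combine(data)
print(f"combined p = {p:.5f}, combined SE = {se:.5f}")
```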

Familiar Names

You said that it's easier to just use the fname parameter. Well, what about the emilio? Or just about any familiar with a special name, for that matter? — Cool12309 (talk) 16:37, 1 June 2013 (UTC)

  • It would be easier to use fname for those, too. Fname was added so we didn't have to go into a protected template every time a non-standard name showed up on a hatchling, and keep expanding a switch statement to handle every possibility. The pre-existing hardcoded ones were left because I am lazy and didn't feel like changing the pages that relied on the hardcoded method. Otherwise, there's no issue with updating them to the fname system; the hardcoded ones could then be removed from the template entirely. --Flargen (talk) 17:00, 1 June 2013 (UTC)