
Discussion on Best Practices for Cascading Sheet Workers

1626586588

Edited 1626586607
Chris D.
Pro
Sheet Author
API Scripter
In a post on Sheetworkers - setAttrs and silent option, the topic came up that writing a sheet with cascading triggers (such that a routine does a getAttrs, a calculation, and a setAttrs, which causes one or more different routines to also trigger and do their own getAttrs and setAttrs, which might cause still more routines to trigger) is a bad idea. I am wondering what people consider best practice for constructing non-cascading sheet workers, and whether it is worth taking a mostly working sheet with cascading sheet workers and changing it to non-cascading.

Maybe my issue with this discussion is that I have not really spent much time looking at how other sheets are put together. I just started putting my sheet together from scratch (using cascading triggers), but I want to clarify the technique you are talking about, to understand what would need to be done if I were to redo it.

It seems to me (without ever having written a sheet that avoids cascades) that there are two ways to avoid them:

(a) On any trigger from the player (not sheet workers or the API), get everything, recalculate everything, and write everything that changed.

(b) Basically recreate the trigger mechanism in your own code. On a trigger, see what attributes are triggered by it, see what those triggered events might result in, then see what is triggered by those things and what they might in turn trigger, repeating until you run out of dependencies. Then do one getAttrs, synchronously recalculate the various triggers in order, and do one write.

So basically, everywhere I currently have one routine and its triggers, I would have two routines (or two ways a single routine could be called): something that reports its dependencies and its outputs, to be used by the master routine to walk my triggers, and the routine that actually calculates my values. If the user changes G, the system checks and discovers that G is used in H and M. H is used nowhere else, but M is used to calculate N, O, and P, none of which are used elsewhere. So the system reads all the inputs to H, M, N, O, and P, calculates them, and writes the new values. Correct?

So maybe my problem is that I tend to think that (a) is overkill and keep thinking that (b) is the appropriate way to do it, but that seems like more coding work. That is, (a) does much bigger reads and more calculations; (b) requires a master routine and a structure the master routine can walk, which is more complex, but involves far less reading and recalculating of values that don't actually change. So what method do most people use? (Or something else altogether?)

Also, with either (a) or (b), how do people handle repeating sections? Let's say I change something outside a repeating section, such as "Dex", that is used inside several repeating sections. With either (a) or (b), do you just get a list of all repeating rowIDs and then get everything inside the repeating section that could depend on those attributes? If so, I would think that would make for some truly massive reads. Thanks for the help.
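The "walk the dependencies first" step of method (b) can be sketched as a plain breadth-first traversal over a hand-maintained map of what each attribute feeds into. This is only an illustrative sketch; the `deps` map and the attribute names (G, H, M, N, O, P) are hypothetical, matching the example above.

```javascript
// Hypothetical dependency map: attribute -> attributes calculated from it.
// Mirrors the example above: G feeds H and M; M feeds N, O, and P.
const deps = {
  G: ['H', 'M'],
  H: [],
  M: ['N', 'O', 'P'],
  N: [], O: [], P: []
};

// Walk the map breadth-first and return every attribute that must be
// recalculated (in dependency order) when `changed` is edited by the user.
const affectedBy = function (changed) {
  const queue = [...(deps[changed] || [])];
  const result = [];
  while (queue.length) {
    const name = queue.shift();
    if (!result.includes(name)) {
      result.push(name);
      queue.push(...(deps[name] || []));
    }
  }
  return result;
};
```

A single getAttrs for the union of the inputs of everything in `affectedBy('G')`, followed by synchronous recalculation and one setAttrs, is the whole of method (b). The `includes` guard also keeps a cyclic dependency from looping forever, though a genuine cycle would still need careful ordering in a real sheet.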
1626624699

Edited 1626624914
GiGs
Pro
Sheet Author
API Scripter
I'm not really following method (b); it sounds very convoluted, but that may just be because I'm not following it. The methods I'm aware of:

The cascade method, or the standard method: have sheet workers that respond to each changing attribute, update attributes accordingly, and rely on cascades to sort out multiple changes. For instance, if a stat changes, a sheet worker calculates its modifier. Then you'll have a sheet worker watching that modifier and changing everything that depends on it, and so on. (I'll explain a common mistake with this method shortly, but you might have spotted it already.)

The Scott method, basically your type (a). This is the method to use when your sheet is very complex. You divide the character sheet into blocks of related attributes (like Scott's example of Starfinder: one block is the character, another is the starship. In the DNF 5e sheet, one block might be the character, another might be the NPC.) The important thing is that there is no overlap between the attributes in these sets, and each set contains attributes that affect many other things (like the way a stat modifier affects a lot of skills, saving throws, attacks, and so on). For each block, you set up a single getAttrs, which grabs all attributes needed for that block; the worker that follows calculates all possible changes to attributes based on a change to any one of them (so you need to build your worker with careful consideration of sequencing, but for most sheets this isn't really that hard), builds a list of changed attributes, and saves them all in a single setAttrs at the end.

Scott's method is by far the most efficient, and for some sheets is probably essential. But IMO it adds a hurdle to building and maintaining sheets that for many sheet authors is unreasonable, especially since many sheets don't need that level of efficiency. That's why I usually recommend the first method, but I would add a caveat (remember the common mistake I mentioned earlier): try to avoid having multiple different sheet workers with the same trigger. If you have, for instance, on('change:dex'), get everything that depends on dex and put it in that one sheet worker. Don't have multiple sheet workers with on('change:dex'). For some sheets it's understandable to do things like this (when there are a lot of overlapping and complex interactions, and the on change line would get bigger and bigger, so you end up with Scott's method without really planning to). But there's honestly nothing wrong if that happens: Scott's method is superior, after all, just harder to build for people who are novices working on their sheets in their spare time. There's also the half-way method, where you handle some blocks on the sheet like Scott's method, but not most of the sheet. It's sometimes easy to grasp how to connect sets of attributes and handle them all together, and if it's easy, it's worth doing.

The reason for avoiding multiple identical triggers is the synchronicity problem: when making sheets for Roll20, it is essential to consider the lag that setAttrs causes. It's not a bug in Roll20 that it acts this way; it's an essential part of understanding network coding. If you can easily reduce the number of getAttrs and setAttrs calls in a sheet, you should. Combining sheet workers with identical triggers into one sheet worker is usually very easy (there are exceptions), and even novice coders should do this when they can.

For repeating sections, the answer is: it depends. If you're using method (a), you use getSectionIDs to build a complete list of every row and every attribute in the section. If you have multiple repeating sections, this means you'll have to nest the calls inside each other because they are asynchronous. It's possible Scott has a better way to do this, but as I understand it, you can't use promises in sheet workers, so you either nest in this way or use callbacks (which is basically nesting them and obscuring the fact). The repeating section issue is the biggest reason I don't recommend Scott's method to inexperienced coders, because handling them with this method does add a lot of complexity. If Scott has figured out a way to simplify this, I'm all ears.

So yes, it can lead to truly massive reads. But remember, the read size is not an issue. getAttrs can grab thousands of attributes in the same time it takes to grab one. The code is running on the client and benefits from the speed of the client computer: any PC capable of running Roll20 at all will not break a sweat at anything you throw at it in the most complex character sheet. This is why getAttrs and setAttrs (and getSectionIDs) are different; they are limited by something much, much slower than your client computer: the internet, and congestion caused by hundreds or thousands or many more users at Roll20's servers.

But it is possible to separate out your repeating sections and have dedicated workers for each of them. That's the time I would say it is acceptable to have multiple change:dex triggers: one that covers all normal changes (all skills, saving throws, etc.), one that covers anything in the repeating section for attacks (where some weapons might use dex), and another for spells (if any require dex rolls).
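The "one worker per trigger" advice can be illustrated with a small sketch. The attribute names (dex, dex_mod, initiative, reflex_save) and the halving formula are hypothetical, and the tiny stand-ins for the Roll20 sheetworker API exist purely so the sketch runs outside the sandbox (in a real sheet, on/getAttrs/setAttrs are provided by Roll20).

```javascript
// Minimal stand-ins for the Roll20 sheetworker API so this runs outside Roll20.
const listeners = {};
const sheet = { dex: '14' };
const on = (evt, fn) => { listeners[evt] = fn; };
const getAttrs = (names, cb) => cb(Object.fromEntries(names.map(n => [n, sheet[n]])));
const setAttrs = (obj) => Object.assign(sheet, obj);

// Instead of three separate on('change:dex') workers, one worker reads
// everything dex affects and writes all results in a single setAttrs call.
on('change:dex', () => {
  getAttrs(['dex'], (values) => {
    const dex = parseInt(values.dex, 10) || 0;
    const mod = Math.floor((dex - 10) / 2); // hypothetical d20-style modifier
    setAttrs({
      dex_mod: mod,
      initiative: mod,      // everything derived from dex belongs in
      reflex_save: mod + 2  // this one worker, not in three separate ones
    });
  });
});

listeners['change:dex'](); // simulate the user editing dex
```

The payoff is exactly the one described above: one getAttrs and one setAttrs round trip instead of three of each, with no ordering ambiguity between competing workers.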
1626629016
Thanks again GiGs and Scott for this clarification. In the case of our sheet, for several reasons, I find the idea of a single trigger and grouping of updates in one function to be very difficult, because of the game mechanics, and also because we want not only to show the "static view" of the character, but also to display all the values including situational modifiers. Let's give an example: in Earthdawn, if you take more than a certain amount of damage in one single hit, you get a Wound, and a Wound gives you a -1 to ALL your rolls. For "mostly everything", we have a base value (your unmodified value) and a final value (the value you will actually roll according to your current situation, for example with a Wound). That means a change in the Wound attribute would more or less trigger an update of everything, so if we don't want multiple update functions, each having Wound as a trigger, we need more or less one function that recalculates everything.

Also, in another thread I spoke of a "tree" of dependencies, but I should rather say a network, because it doesn't really have a "trunk". Currently we have mostly a cascading strategy: in order to build the on(change: ...) line, we look at all the inputs of the function (i.e. all the arguments of its getAttrs), and we can build the on change line just by listing those arguments. For example, our dex is part of our initiative and also part of all our skills, so any change triggers these two functions:

```javascript
on("change:dex-step change:adjust-effect-tests-total change:misc-initiative-adjust change:initiative-mods change:dex-mods change:ip change:creature-ambush change:creature-ambushing",
  fn(updateInitiative));
on("change:condition-impairedmovement change:condition-darkness change:initiative-mods change:initiative-mods-auto change:dex-step change:dex-mods change:str-step change:str-mods change:tou-step change:tou-mods change:per-step change:per-mods change:wil-step change:wil-mods change:cha-step change:cha-mods",
  fn(updateRepStep));
```

If I understand the recommendation correctly, we should "cross-reference" this, and instead build, from a trigger, a list of everything that has to be updated, including the cascade, so something that looks more like this:

```javascript
on("change:dex-step", fn(function ocDexStep(eventInfo) {
  'use strict';
  updateAttribFinal("Dex");
  if (eventInfo.sourceType === "player")
    backfillOrigAttrib("Dex");
}));
```

What remains difficult for me to understand is, in a very tight network of dependencies, how you avoid everything ending up in one single mega function. In the example above, if Dex-Step is changed, we go through all of our talents to check which ones are dex based and update them, but we also go through our Initiative. If we wanted to go for a single-trigger strategy, what would we do with, say, a Str-Step trigger, which obviously also needs to go through all talents, but doesn't influence Initiative? Is it one big mega function where, based on the eventInfo that triggered it, we do or don't go through the various subfunctions?
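The practice of deriving the on(...) string from the getAttrs inputs can be automated with a tiny helper, so the listener line and the read list can never drift apart. This is only a sketch; `changesOf` is a hypothetical helper name, and the attribute list is a shortened version of the Earthdawn initiative inputs above.

```javascript
// Hypothetical helper: build the listener string straight from the list of
// attributes a worker reads. Roll20 listener names are lowercase, hence
// the toLowerCase().
const changesOf = (attrs) => attrs.map(a => `change:${a.toLowerCase()}`).join(' ');

// A shortened, hypothetical version of the initiative inputs above.
const initiativeInputs = ['dex-step', 'dex-mods', 'initiative-mods', 'ip'];
```

Then `on(changesOf(initiativeInputs), updateInitiative)` replaces the hand-maintained string, and adding a new input to the getAttrs list automatically extends the trigger list too.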
1626629050

Edited 1626638195
Scott C.
Forum Champion
Sheet Author
API Scripter
Compendium Curator
Sigh. Had a nice forum post set up here and then it deleted itself. But I'm going to post an example of my technique to see if that helps. Most of the explanation is in the comments in the code, but I'll make a few points at the end.

```javascript
//An array of objects, each of which holds the details on what fields need to be
//grabbed for a given repeating section. This can have as many or as few as you need.
const repeating_section_details = [
  { section: 'repeating_attacks', fields: ['mod', 'misc', 'damage'] }
];

/* Expansion functions to update the cascades to match the sheet */
//Expands the repeating section templates in cascades to reflect the rows actually available
const expandCascade = function (cascade, sections, attributes) {
  return _.keys(cascade).reduce((memo, key) => { //iterate through cascades and replace references to repeating attributes with correct row ids
    if (/^repeating/.test(key)) { //If the attribute is a repeating attribute, do special logic
      expandRepeating(memo, key, cascade, sections, attributes);
    } else { //for non repeating attributes do this logic
      expandNormal(memo, key, cascade, sections);
    }
    return memo;
  }, {});
};

const expandRepeating = function (memo, key, cascade, sections, attributes) {
  key.replace(/(repeating_[^_]+)_[^_]+?_(.+)/, (match, section, field) => {
    sections[section].forEach((id) => {
      memo[`${section}_${id}_${field}`] = _.clone(cascade[key]); //clone the details so that each row's attributes have correct ids
      memo[`${section}_${id}_${field}`].name = `${section}_${id}_${field}`;
      memo[`${section}_${id}_${field}`].affects = memo[`${section}_${id}_${field}`].affects.reduce((m, affected) => {
        if (section === affected) { //if the affected attribute is in the same section, simply set the affected attribute to have the same row id
          m.push(applyID(affected, id));
        } else if (/repeating/.test(affected)) { //if the affected attribute isn't in the same repeating section but is still a repeating attribute, add all the rows of that section
          addAllRows(affected, m, sections);
        } else { //otherwise the affected attribute is a non repeating attribute; simply add it to the computed affects array
          m.push(affected);
        }
        return m;
      }, []);
    });
  });
};

const applyID = function (affected, id) {
  return affected.replace(/(repeating_[^_]+_)[^_]+(.+)/, `$1${id}$2`);
};

const expandNormal = function (memo, key, cascade, sections) {
  memo[key] = _.clone(cascade[key]);
  memo[key].affects = memo[key].affects.reduce((m, a) => {
    if (/^repeating/.test(a)) { //if the attribute affects a repeating attribute, it should affect all rows in that section
      addAllRows(a, m, sections);
    } else {
      m.push(a);
    }
    return m;
  }, []);
};

const addAllRows = function (affected, memo, sections) {
  affected.replace(/(repeating_[^_]+?)_[^_]+?_(.+)/, (match, section, field) => {
    sections[section].forEach(id => memo.push(`${section}_${id}_${field}`));
  });
};

/* Calculation Functions: these could be whatever you need for your system;
   these functions are just dummy placeholders for this demo */
const abilityMod = function (statObj, attributes, sections) {
  //returns the calculation for ability points
};
const derivativeStatCalc = function (statObj, attributes, sections) {
  //returns the calculation for a derivative stat
};
const attackTotal = function (statObj, attributes, sections) {
  //returns the calculation for an attack total
};

//accesses the sheet and iterates through all changes necessary before calling setAttrs
const accessSheet = function (attributes, sections, trigger) {
  const setObj = {}; //initialize our object to hold our updates
  const casc = expandCascade(cascades, sections, attributes); //create a copy of the cascades updated to match the repeating sections present on the sheet
  let triggerObj = casc[trigger]; //get the object from the cascade that matches the attribute that was changed
  //Now we need to work through the cascades of affected attributes.
  //This will be another queue worker, or burn down, pattern.
  let queue = triggerObj ? [...triggerObj.affects] : []; //initialize the queue with the affects array of the triggering attribute
  while (queue.length) { //while the queue is not empty, keep working
    let name = queue.shift(); //pull an attribute that needs to be worked off of the queue
    let obj = casc[name]; //get the cascade object for that attribute
    setObj[name] = obj.calculation(obj, attributes, sections); //call the attribute's calculation
    attributes[name] = setObj[name]; //update the attributes object to match the new value so it can be used in future calculations
    queue = [...queue, ...obj.affects]; //add any attributes that this attribute affects to the queue; rinse and repeat until nothing remains
  }
  setAttrs(setObj, { silent: true }); //apply all of our changes silently
};

//An object used as a lookup for what a given attribute affects and how to
//calculate it, indexed by attribute name. The attribute name for repeating
//sections is a template for how that repeating attribute should be expanded.
//This allows us to avoid having a ton of if/else chains.
const cascades = {
  strength_mod: { //name index
    name: 'strength_mod', //name as a property for easier use when needed
    defaultValue: 0, //the default value of the attribute, for use in calculations in case something goes wrong
    type: 'number', //what type the attribute is; not strictly needed, but frequently nice to have for logic on how to assemble attributes
    affects: ['athletics', 'repeating_attacks_$X_mod'], //an array of the attributes that this attribute affects
    calculation: abilityMod //what function to call to calculate this value
  },
  dexterity_mod: { name: 'dexterity_mod', defaultValue: 0, type: 'number', affects: ['initiative'], calculation: abilityMod },
  strength: { name: 'strength', defaultValue: 0, type: 'number', affects: ['strength_mod'] },
  dexterity: { name: 'dexterity', defaultValue: 0, type: 'number', affects: ['dexterity_mod'] },
  initiative: { name: 'initiative', defaultValue: 0, type: 'number', affects: [], calculation: derivativeStatCalc },
  athletics: { name: 'athletics', defaultValue: 0, type: 'number', affects: [], calculation: derivativeStatCalc },
  athletics_rank: { name: 'athletics_rank', defaultValue: 0, type: 'number', affects: ['athletics'] },
  repeating_attacks_$X_mod: { name: 'repeating_attacks_$X_mod', defaultValue: 0, type: 'number', affects: [], calculation: attackTotal },
  repeating_attacks_$X_misc: { name: 'repeating_attacks_$X_misc', defaultValue: 0, type: 'number', affects: ['repeating_attacks_$X_mod'] }
};

//Assemble our array of attributes/sections to monitor
const toMonitor = Object.keys(cascades).reduce((m, k) => {
  if (!/repeating/.test(k)) {
    m.push(k);
  }
  return m;
}, []);
const baseGet = [...toMonitor, 'sheet_version']; //assemble our array of attributes to be gotten
repeating_section_details.forEach((obj) => toMonitor.push(obj.section)); //add the repeating sections to the monitor array; done here so they don't pollute baseGet

//Gets all the section IDs. This function uses the queue worker pattern,
//sometimes also called a burn down.
const getSections = function (callback, getArray = [], trigger, sections = {}, queue) {
  queue = queue || _.clone(repeating_section_details); //make a copy of the section details array so that we don't corrupt it for future calls
  let section = queue.shift(); //get the details for a section to work
  getSectionIDs(section.section, (idArray) => { //actual call to the getSectionIDs sheetworker function
    sections[section.section] = []; //initialize the array for this section in sections
    idArray.forEach((id) => { //iterate through the ids given by getSectionIDs
      sections[section.section].push(id); //push the row id to the array
      section.fields.forEach((field) => { //iterate through the fields for the section
        getArray.push(`${section.section}_${id}_${field}`); //push the full repeating attribute name to the getArray
      });
    });
    if (_.isEmpty(queue)) { //if there are no further sections to work through, get the attributes and call the callback
      getAttrs(getArray, (attributes) => callback(attributes, sections, trigger));
    } else { //otherwise keep getting IDs for sections
      getSections(callback, getArray, trigger, sections, queue);
    }
  });
};

//Registers all the event listeners
const registerEventHandlers = function () {
  toMonitor.forEach((m) => {
    on(`change:${m}`, (event) => {
      getSections(accessSheet, [...baseGet], event.sourceAttribute); //baseGet is spread here so that we don't corrupt the global version of it for future events
    });
  });
};

//Initialize the event listeners.
registerEventHandlers();
```

So, the points I want to make:

1) This is fast! All the cascade expansion and logic to figure out what to do is orders of magnitude quicker than waiting on a setAttrs to resolve, maybe 5 ms. Even the getSectionIDs calls are incredibly quick, on the order of tens of ms per call, about like a getAttrs. So for a single repeating section like I have here, we're looking at a total calculation time (including the setAttrs resolution) of maybe 130 ms.

2) This is actually pretty easy to expand and edit for future changes to the sheet. You'll need to add or edit an entry in the cascades, then you might need to do the same for the repeating_section_details, and maybe you'll need a new calculation function.

3) We have no issues with synchronicity, because all of our calculations are done within a single callback, which allows us to mostly ignore the ugliness of working with asynchronous functions.

4) With some additional logic, we can actually do truly dynamic assignment of what attributes affect what other attributes, to handle buffs/conditions/homebrew rules. For instance, a user could put +@{strength_mod} in an input, and because we have all the data, we can suddenly make initiative dependent on strength as well as dexterity without needing an additional getAttrs.

5) GiGs is correct that this is overkill for many sheets, and even highly interconnected sheets will probably have a couple of event listeners that lead down different paths; you don't need to lump your PC listeners and vehicle listeners together if they have wildly different attribute sets. However, I think there are still many sheets that could benefit from this. Any sheet that approaches the complexity of the 5e sheet probably would.
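The queue worker (burn-down) loop at the heart of Scott's example can be isolated and run on its own. The sketch below is a stripped-down, self-contained version of that idea with a hypothetical three-attribute cascade (strength feeds strength_mod, which feeds athletics); it is not the full framework, just the core while loop.

```javascript
// Hypothetical mini-cascade: each entry lists what it affects and how to
// compute it from the current attribute values.
const miniCascade = {
  strength:     { affects: ['strength_mod'], calculation: (a) => a.strength },
  strength_mod: { affects: ['athletics'],    calculation: (a) => Math.floor((a.strength - 10) / 2) },
  athletics:    { affects: [],               calculation: (a) => a.strength_mod + a.athletics_rank }
};

// Burn down the queue of affected attributes, reusing each new value in
// later calculations, and return the object that would go to setAttrs.
const burnDown = function (cascade, attributes, trigger) {
  const setObj = {};
  let queue = cascade[trigger] ? [...cascade[trigger].affects] : [];
  while (queue.length) {
    const name = queue.shift();
    const obj = cascade[name];
    setObj[name] = obj.calculation(attributes);
    attributes[name] = setObj[name]; // make the new value visible downstream
    queue = [...queue, ...obj.affects];
  }
  return setObj;
};
```

`burnDown(miniCascade, { strength: 14, athletics_rank: 3 }, 'strength')` walks strength_mod and then athletics in one synchronous pass; in the real framework the return value is what gets handed to `setAttrs(setObj, { silent: true })`.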
1626633425
GiGs
Pro
Sheet Author
API Scripter
Jiboux said: "What remains difficult to understand for me, is in a very tight network of dependencies, how do you avoid that everything ends up in one single mega function." There might have been a miscommunication here. The goal isn't to avoid a single mega function; the goal is to create one. Or, more likely, a series of subroutines within a larger function.
1626647761
Chris D.
Pro
Sheet Author
API Scripter
Thank you all very much; this is incredibly useful. I think that converting from cascades to something like the above would be a major project taking several days, but I agree that the hardest part is just getting the first few pieces working how you like them; beyond that it is just replicating the same transformation for all the many, many attributes in the sheet. A lot of hard detail work, but easy once it is figured out. Thanks!
1626653291
GiGs, Scott... as said by Scott, thank you so much. I read Scott's code, and my mind is still blown, with so much to read before I understand it in detail, but I do now see what you meant, and I do understand the benefit. It's a completely different code architecture than what we have, but I see the skeleton there for queueing, and many of our current calculation functions could be rewritten for it. Still a huge project, but we have quite a complex sheet, so it is true we could benefit from it. Have a great day!
1626679619

Edited 1626680659
Chris D.
Pro
Sheet Author
API Scripter
I think I will post some more thoughts on this, and maybe explain my "method B" better. So let's take two examples, and people can tell me if I have the right or wrong of it.

Example #1: a change is made to a variable with no dependents. Let's say somebody applies a buff or debuff to a character's "Physical Defense".

Example #1, Cascading: A cascading sheet would trigger on the change to "PD-buff"; the triggered sheet worker would read in PD-nat, PD-buff, and any other needed attributes, and write a new "PD". Since no attributes trigger off of "PD", the process ends here.

Example #1, Scott method: My understanding is that in the same circumstance, the Scott method would read all the values in the sheet (possibly including all the rows of a repeating section; possibly it would know that it does not need to do that), recalculate everything, then write the one and only value (PD) that actually changed in the recalculations.

Expected result for #1: both methods are fast. I expect cascading would be slightly faster, but probably not noticeably so; if there is even one cascade, the cascading method is probably slower. Does the Scott method code above recalculate all the repeating sections every time, no matter what was changed? Because if it does, the cascading method would be comparatively faster still.

Example #2: a change is made to a variable with many dependents (such as "Dex"), where many of the variables that depend on Dex have dependents themselves, and those dependents have dependent variables, unto several generations. For example, a change to Dex-Orig causes Dex to change, which causes PD-Nat to change, which in turn causes PD to change. Another branch of dependencies might have a change to Dex causing everything in several repeating sections to be reexamined.

Example #2, Cascading: This is the worst-case scenario for cascading. Dozens of routines trigger simultaneously in their own threads; the results of these in turn trigger dozens more routines. Some routines trigger two or three times because more than one of their triggers gets updated, and so on. It can take a noticeable amount of time for all the triggered updates to cascade through the system. Hopefully there will not be a race condition of any sort.

Example #2, Scott method: This is pretty much identical to the minor change of example #1. Every attribute gets read, every attribute gets calculated, and in this case a great many attributes get written, at which point the process ends. Result: the Scott method is the clear winner.

Method B: What I called method B is kind of a mix of the two systems; basically, the user's sheet worker code replicates the system's triggering mechanism. Which is to say, the data structure is navigated twice: once before the read, to find out exactly which attributes need to be read, and again after the data lookup, to actually do the calculations, but only the code sections affected by the triggering change are run. If the system determines that a recalculation is not needed, it is not done.

Example #1, Method B: The system triggers on the change to "PD-buff". It uses its data structure to determine that the only code section that triggers off of PD-buff is "updatePD", that the inputs to updatePD are PD-buff, PD-nat, and anything else needed, and that the only output of updatePD is "PD". It then does the exact same check on PD that it did for PD-buff, and discovers that a change to PD will not trigger any further code sections. So only the four values needed are looked up. Once the attributes are read, one call is made to updatePD. The system then double-checks that PD did indeed change, and that it does not need to do anything further with that change (does not need to calculate any further values). PD is then written and the routine ends.

Expected result: probably about as fast as cascades, possibly a bit faster than the Scott method, since it does some preprocessing at the front but skips a thousand useless calculations.

Example #2, Method B: The system triggers on the change to "Dex-orig", does its checks, and discovers that this can cause a change to "Dex". It does its checks on Dex and discovers that this can trigger many update code sections, including ones that update repeating sections. It walks its data structure and discovers all the hundreds of attributes it will need to look up in order to process all the triggers it can foresee. It does those lookups. It then starts its recalculations. Changing Dex-orig does indeed change the value of Dex, so it adds all the code sections triggered by changes to Dex to its list of routines to walk through. During its walk-through, it notes that while the change to Dex might have resulted in a change to PD-nat, in this particular case the change was not great enough to actually move PD-nat to a different value, so it does not recalculate PD, since none of PD's triggers actually changed.

So once again, I have never implemented either the Scott method or method B; these are both just ideas I had for how the code might work if I were to get rid of my cascading method. My query was just to find out what methods people do use, and their pros and cons. It is extraordinarily useful to see an example of the Scott method. Very fine code.

It seems to me that method B is somewhat more complex to implement the very first time, while you are working out the framework, but once the complexities have been ironed out, it is no more difficult to add additional attributes, or to replicate in other sheets.

What are people's thoughts? Is the "calculate it all every time" overhead really so low that striving for further efficiencies is pointless? Do you think that replicating triggers synchronously in your own code would add so much overhead that it would probably actually be faster to recalculate everything every time? Thanks for your thoughts and guidance.
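The pre-read walk that method B describes (figure out the minimal set of attributes to getAttrs before reading anything) can be sketched as a traversal over a table of code sections. Everything here is hypothetical, mirroring example #1 above: a single updatePD section with its triggers, inputs, and output.

```javascript
// Hypothetical registry of code sections: which attributes trigger them,
// which they read, and which they write. Mirrors example #1 above.
const sections = {
  updatePD: {
    triggers: ['pd-buff', 'pd-nat'],
    inputs: ['pd-buff', 'pd-nat', 'pd-armor'],
    outputs: ['pd']
  }
};

// Given a changed attribute, collect every section that could run (directly,
// or via the outputs of earlier sections) and the union of attributes to read.
const planRead = function (changed) {
  const toRun = [];
  const reads = new Set();
  let frontier = [changed];
  while (frontier.length) {
    const attr = frontier.shift();
    for (const [name, sec] of Object.entries(sections)) {
      if (sec.triggers.includes(attr) && !toRun.includes(name)) {
        toRun.push(name);
        sec.inputs.forEach(i => reads.add(i));
        frontier = [...frontier, ...sec.outputs]; // outputs may trigger more sections
      }
    }
  }
  return { toRun, reads: [...reads] };
};
```

One `getAttrs(plan.reads, ...)` then follows, the sections in `plan.toRun` run synchronously in order, and a single setAttrs writes whatever actually changed, which is the whole of the "navigate the structure twice" idea.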
1626688614

Edited 1626688635
Just to put this in perspective, this is the result of running through every attribute on the Roll20 5e sheet, finding the attributes which end in _mod, doing some pointless math on them, and then storing them in an output {} object. This is on a Surface tablet, not exactly a powerhouse. I'm currently on a remote site, so it's probably not quite a fair comparison, but from here a setAttrs takes around 50 ms for a round trip. You'd need to do an awful lot of client-side processing to make up for even two setAttrs calls. Although my latency from the middle of the desert is far from ideal, plenty of people using Roll20 are on less-than-ideal machines or connections, so if you can make the sheet more usable for them, great.
1626693701

Edited 1626693728
Scott C.
Forum Champion
Sheet Author
API Scripter
Compendium Curator
So, you've mostly got my method down, but it only sets what needs to be set. It grabs everything at the start, but then only updates what actually needs to change. This includes repeating sections. Other than the fact that it is greedy with getAttrs, it's actually closer to your option B. In my current iterations of it, it doesn't check whether a change is large enough to actually affect downstream attributes, but that is a good idea that could likely be added easily by wrapping the queue addition in an if statement. Indeed, option B was (and is) my goal with this sort of scaffold, but I haven't yet developed the data structure that allows me to know ahead of time what's going to affect what. For instance, if which attribute affects an attack depends on a drop-down, you need to do a getAttrs to know that. So I just grab everything, sort out what was actually needed, and don't use the rest.
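The "wrap the queue addition in an if statement" idea can be sketched directly on the burn-down loop: only push an attribute's dependents onto the queue when its newly calculated value differs from the old one. The cascade and its formulas below are hypothetical stand-ins, chosen so that a small dex change rounds away before it reaches pd-nat.

```javascript
// Hypothetical cascade where pd-nat rounds, so small dex changes may not move it.
const cascade = {
  dex:      { affects: ['pd-nat'], calculation: (a) => a.dex },
  'pd-nat': { affects: ['pd'],     calculation: (a) => 5 + Math.floor(a.dex / 3) },
  pd:       { affects: [],         calculation: (a) => a['pd-nat'] + a['pd-buff'] }
};

// Burn down the queue as before, but skip downstream work when nothing changed.
const burnDownGuarded = function (attributes, trigger) {
  const setObj = {};
  let queue = [...cascade[trigger].affects];
  while (queue.length) {
    const name = queue.shift();
    const obj = cascade[name];
    const next = obj.calculation(attributes);
    if (next !== attributes[name]) { // the guard: only cascade real changes
      setObj[name] = next;
      attributes[name] = next;
      queue = [...queue, ...obj.affects];
    }
  }
  return setObj;
};
```

With dex going to 13, pd-nat recalculates to its old value of 9, so pd is never touched and the setAttrs payload is empty; with dex going to 15, both pd-nat and pd update.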
1626818430

Edited 1626819465
Chris D.
Pro
Sheet Author
API Scripter
At the risk of going off topic again, it occurs to me that I am unaware of any good "sample" sheet templates. I have glanced through a number of sheets, but they always have so much going on that I can't see the structure for all the system-specific fields. People should not be told to look at such-and-such a sheet and ignore the 95% of it that is system specific; what's needed is a clean, almost blank sheet.

It would be very, very nice if somebody prepared an elaborate and elaborately commented, almost blank but fully functional character sheet that showed off all the best practices. One with a header, a logo, tabs, interesting CSS, and all the things a real character sheet ought to have, but only maybe one or two fields and a button: just enough to show off what an elaborately complex sheet worker structure could do with those two fields.

And maybe a short optional API script that likewise had all the structure a real, big API script ought to have (with tries and catches and logging of caught errors, etc.) but very little actual sheet processing; just a sample to show where the guts should go and what they ought to look like and do. Possibly including chat menu building and processing, whatever helper functions you usually have, and so on.

Then take these fully functional files, post them to GitHub, and edit the "Building Character Sheets" wiki page, the "Sheet Author Tips" page, and a few others to say: "Hey, if you don't want to start your sheet from a blank page, here is an advanced, best-practices framework you can use as a starting point. Just add your own code to this framework."

Thoughts? Volunteers?
1626819091

Edited 1626819356
GiGs
Pro
Sheet Author
API Scripter
One of the problems there is that every sheet has different needs, and they are usually being built by people learning their way - which applies to both me and Scott, too. So there is no sheet like that, and creating one would be a mammoth undertaking. Also, no one likes writing documentation, and it would need a lot! PS: best practices can also be a matter of opinion - at least to a degree. So any single document that attempts to be the perfect example of sheet design will likely have detractors disagreeing and pointing out other approaches. There probably isn't a single best approach for all situations.
1626820913
Chris D.
Pro
Sheet Author
API Scripter
True, but my thinking is that beginners who don't feel up to writing a non-cascading sheetworker might feel better about it if they have some handy giants' shoulders to stand upon. Rather than writing the template from scratch, I was thinking that somebody who thinks they have a pretty good sheet could just remove 95% of their code and leave a structure. My idea is not to create the ultimate sheet; in fact the template would not be a sheet as such, just a "hello world" type template, an empty structure to give people a much more advanced starting point as opposed to a blank screen. I would do it myself, except my code is rather crude and not best practices - my sheetworkers cascade. Maybe if nobody has done this by the time I get around to writing my Method B routine, I will do a template also.
1626826849

Edited 1626827489
Personally, I would firmly push everyone in the direction of getting comfortable with Promises and async/await, and use the async versions of the sheetworker scripts (originally by Onyxring, I believe) as the primary "best practice" template. The code is so much easier to read and understand without callbacks writhing around your code like a bunch of cut snakes. Honestly, even if you're not comfortable with Promises in JS, the basic use is easy enough to follow and (IMO) more intuitive than how an async callback works in otherwise-synchronous code.

The condensed version I've been using:

```javascript
// ASYNC CORE FUNCTIONS - async sheetworker functions
// Modified from Scott C's version of Onyxring's code
const as = (() => {
    const setActiveCharacterId = function (charId) {
        let oldAcid = getActiveCharacterId();
        let ev = new CustomEvent("message");
        ev.data = {"id": "0", "type": "setActiveCharacter", "data": charId};
        self.dispatchEvent(ev);
        return oldAcid;
    };
    const promisifyWorker = (worker, parameters) => {
        let acid = getActiveCharacterId();
        let prevAcid = null;
        return new Promise((res, rej) => {
            prevAcid = setActiveCharacterId(acid);
            try {
                if (worker === 0) getAttrs(parameters[0] || [], (v) => res(v));
                else if (worker === 1) setAttrs(parameters[0] || {}, parameters[1] || {}, (v) => res(v));
                else if (worker === 2) getSectionIDs(parameters[0] || '', (v) => res(v));
            } catch (err) { rej(console.error(err)); }
        }).finally(() => setActiveCharacterId(prevAcid));
    };
    return {
        getAttrs(attrArray) { return promisifyWorker(0, [attrArray]); },
        setAttrs(attrObj, options) { return promisifyWorker(1, [attrObj, options]); },
        getSectionIDs(section) { return promisifyWorker(2, [section]); }
    };
})();
```

Then a basic get/setAttrs function looks like:

```javascript
const attrChange = async () => {
    let output = {};
    let requiredAttrs = ['stamina', 'dexterity'];
    let attrs = await as.getAttrs(requiredAttrs);
    // Do stuff to attributes, save to {output}
    await as.setAttrs(output);
    // Do stuff after setAttrs completes
};
```

I find that much more readable, especially if you do need multiple get/set operations - using 'await' will ensure your code runs from top to bottom, just the way a human reads, without having to follow someone else's callback-hell trail. And, of course, you can leave off the 'await' or tack a .then() function on the end if you want to mess around with the order and timing - it doesn't have to run in sequence, but at least you can control it when you need to.

I'd coded most of my first sheet when Scott C pointed me in the direction of the async sheetworkers - I immediately rewrote the code and was very happy with the improvement. I'd actually tried to promisify the functions myself, but failed due to the "no active character in sandbox" error which Onyx solved.
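To show what "multiple get/set operations, top to bottom" looks like in practice, here is a hypothetical example that reads a repeating section, then its attributes, then writes once. Everything here is made up for illustration: the `repeating_weapons` section, the attribute names, and the `as` object itself, which is stubbed with plain promises so the control flow can be seen (and run) outside the Roll20 sandbox.

```javascript
// Stand-in stubs for the async sheetworker wrapper, usable outside the
// Roll20 sandbox. In a real sheet these would be the promisified wrappers;
// the section and attribute names below are hypothetical.
const as = {
    getSectionIDs: async (section) => ['-Mabc1', '-Mabc2'],
    getAttrs: async (names) => Object.fromEntries(names.map(n => [n, 2])),
    setAttrs: async (obj) => obj
};

// Each await completes before the next line runs - no nested callbacks.
const recalcWeapons = async () => {
    const ids = await as.getSectionIDs('repeating_weapons');
    const names = ids.map(id => `repeating_weapons_${id}_damage_bonus`);
    const attrs = await as.getAttrs([...names, 'strength_mod']);
    const output = {};
    ids.forEach(id => {
        output[`repeating_weapons_${id}_damage_total`] =
            (+attrs[`repeating_weapons_${id}_damage_bonus`] || 0) +
            (+attrs.strength_mod || 0);
    });
    return as.setAttrs(output);
};
```

The whole chain reads as three sequential steps, where the callback version would be three levels of nesting; collecting everything into one `output` object also keeps it to a single setAttrs, which fits the no-cascade goal discussed above.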