Data storage

Ken L.
I'm curious what everyone's using for their data storage. For big-ticket items like magic item generators or item databases, sticking things in the state variable only goes so far. The handout/character bio/gmnotes fields, depending on the size of the data (and server stress at the moment), can be incredibly slow. But without any true database storage, we're relegated to flat-document data storage, which leaves a lot to be desired. So what have your solutions been?

I've been using the gmnotes route, splitting databases across handouts or character journals where the textual content is the DB. Updates are driven by the monitor logic rather than by listening to change events, which can lead to over-processing. As the text area grows, the script slows down because it has to reprocess the entire text block. Rather than hot-loading it into a variable, it accesses the text area atomically for each operation, re-parsing it each time. Operations are batched, so 'Add Spell' makes several changes but requires only one access and one save.

Most of the wait time isn't actually in the parsing, but in the get() method, which means Firebase is getting hammered, and each transaction moves a large chunk of data. I suppose I could keep it in a variable to save per command, but to keep everything synchronized, especially with the sandbox self-restarting during peak hours, this seemed the best route for data preservation: if the sandbox goes down, at least the last operation completed in a clean state (unless the sandbox restart interrupts the middle of a JS message, in which case we're boned).

So what's your shtick?
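As a concrete sketch of the pattern above (the handout name 'ItemDB' and the JSON layout are hypothetical, and the GM notes are assumed to hold plain JSON), the batched read-mutate-write looks something like this in a Roll20 API script:

```javascript
// Minimal sketch of the gmnotes-as-database pattern described above.
// 'ItemDB' is a hypothetical handout name; its gmnotes are assumed to be JSON.
function withItemDB(mutate) {
    var handout = findObjs({ _type: 'handout', name: 'ItemDB' })[0];
    if (!handout) {
        return;
    }
    // gmnotes must be read asynchronously in the Roll20 API.
    handout.get('gmnotes', function(notes) {
        var db;
        try {
            db = JSON.parse(notes || '{}');
        } catch (e) {
            db = {};
        }
        mutate(db);                                  // apply a whole batch of changes in memory
        handout.set('gmnotes', JSON.stringify(db));  // one write per batch
    });
}

// Usage: several changes, but only one read and one write hit Firebase.
withItemDB(function(db) {
    db.spells = db.spells || {};
    db.spells['fireball'] = { level: 3, school: 'evocation' };
    db.spells['shield']   = { level: 1, school: 'abjuration' };
});
```

The single set() per batch is what keeps an 'Add Spell'-style operation down to one access and one save.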
The Aaron
Pro
API Scripter
So far, all I've needed has been the state object. I've not run into any problems with size or access speed, but I'm not storing a whole lot, maybe 500 rows or so of data. I've conceived of some systems whereby I would use a handout as a backing store, but have a live object I'm referencing and changing, with updates journaled to the state and applied on an interval. Thus far, I've not needed it.
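A rough sketch of that journaled design, under the assumption that the backing handout is named 'LiveStoreBackup' (illustrative, not an actual script of The Aaron's):

```javascript
// Sketch of the journaled backing-store idea: a live in-memory object for
// fast access, each change journaled to state, and the whole object flushed
// to a backing handout on an interval. Names and shapes are illustrative.
var LiveStore = (function() {
    var data = {};   // live object, referenced and changed directly

    state.LiveStoreJournal = state.LiveStoreJournal || [];

    function set(key, value) {
        data[key] = value;
        state.LiveStoreJournal.push({ k: key, v: value });  // journal to state
    }

    function flush() {
        var handout = findObjs({ _type: 'handout', name: 'LiveStoreBackup' })[0];
        if (handout) {
            handout.set('gmnotes', JSON.stringify(data));
            state.LiveStoreJournal = [];  // journaled entries are now in the backing store
        }
    }

    on('ready', function() {
        var handout = findObjs({ _type: 'handout', name: 'LiveStoreBackup' })[0];
        if (handout) {
            handout.get('gmnotes', function(notes) {
                try { data = JSON.parse(notes || '{}'); } catch (e) { data = {}; }
                // Re-apply anything journaled after the last flush.
                state.LiveStoreJournal.forEach(function(entry) {
                    data[entry.k] = entry.v;
                });
            });
        }
        setInterval(flush, 60000);  // apply the journal to the backing store once a minute
    });

    return { set: set };
}());
```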
Lithl
Pro
Sheet Author
API Scripter
Ken L. said: For big-ticket items like magic item generators or item databases, sticking things in the state variable only goes so far. The handout/character bio/gmnotes fields, depending on the size of the data (and server stress at the moment), can be incredibly slow.

Do not use state as a database for large amounts of data. state is re-written to the server something like once every 5 seconds. Use state to store state information.

Ken L. said: But without any true database storage, we're relegated to flat-document data storage, which leaves a lot to be desired. So what have your solutions been?

For static data, I simply create objects/arrays in JS. For example, I'm currently working on a Pathfinder Adventure Card Game port, and I've got all of the cards represented as objects directly in my JS code.
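For example, a static dataset baked into the script might look like this (the card fields and the '!card' command are made up for illustration):

```javascript
// Sketch of the static-data approach: the dataset lives directly in the
// script, so there is no Firebase read at all. Card fields are illustrative.
var PACG_CARDS = {
    'longsword': { type: 'weapon', traits: ['slashing', 'melee'], check: 'Strength' },
    'cure':      { type: 'spell',  traits: ['divine', 'healing'], check: 'Divine'   }
};

on('chat:message', function(msg) {
    if (msg.type === 'api' && msg.content.indexOf('!card ') === 0) {
        var name = msg.content.slice(6).trim().toLowerCase();
        var card = PACG_CARDS[name];
        sendChat('CardDB', card ? JSON.stringify(card) : 'No such card: ' + name);
    }
});
```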
The Aaron
Pro
API Scripter
I think Ken was talking about dynamically accumulated data, not static data. That does bring up an interesting idea though. You could generate dynamic data that you want to become static, then output it in a form that's easy to import into your static database. =D
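One hedged sketch of that dynamic-to-static export: dump the accumulated data to the API console as a JS literal that can be pasted back into the script (the '!exportdb' command and state.ItemDB are hypothetical names):

```javascript
// One way to turn accumulated dynamic data into a static dataset: dump it
// to the API console as a JS literal you can paste back into the script.
on('chat:message', function(msg) {
    if (msg.type === 'api' && msg.content === '!exportdb') {
        log('var STATIC_DB = ' + JSON.stringify(state.ItemDB || {}, null, 2) + ';');
    }
});
```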
iamthereplicant

I'm currently building a framework to handle buffs in Pathfinder that automatically updates character sheet attributes, and because I don't want those buffs becoming permanent in the event of an API crash, I'm using a handout with the JSON object stored in the GM notes that's updated whenever a buff is added or removed. This is read into the script on "ready" so those buffs can be removed if they still exist.
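A minimal sketch of that crash-safe buff record (the handout name, buff shape, and attribute math are illustrative, not the actual framework):

```javascript
// Active buffs are mirrored to a handout's gmnotes on every add/remove, and
// any that survived an API crash are reverted on 'ready'. Names are illustrative.
function saveBuffs(activeBuffs) {
    var handout = findObjs({ _type: 'handout', name: 'ActiveBuffs' })[0];
    if (handout) {
        handout.set('gmnotes', JSON.stringify(activeBuffs));
    }
}

on('ready', function() {
    var handout = findObjs({ _type: 'handout', name: 'ActiveBuffs' })[0];
    if (!handout) { return; }
    handout.get('gmnotes', function(notes) {
        var lingering;
        try { lingering = JSON.parse(notes || '[]'); } catch (e) { lingering = []; }
        // Revert any buff that was still applied when the sandbox died.
        lingering.forEach(function(buff) {
            var attr = findObjs({
                _type: 'attribute',
                _characterid: buff.characterId,
                name: buff.attribute
            })[0];
            if (attr) {
                attr.set('current', parseInt(attr.get('current'), 10) - buff.bonus);
            }
        });
        handout.set('gmnotes', '[]');  // clean slate for this session
    });
});
```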
Ken L.

The Aaron said: I think Ken was talking about dynamically accumulated data, not static data.

Pretty much; for a static dataset I'm sure that would be fine, or even a one-time-load separated dataset. The journaling seems like a good route, but with the 'sandbox could die at any random moment' problem you'd lose some data or deal with sync issues (though JS messaging, IIRC, should make this a non-issue).
iamthereplicant

As far as I know, the only real way to handle the sandbox-dying sync issue is to write to the handout but add a semaphore property that tells your code it hasn't been processed yet. Then you process the changes, and once those changes are done you rewrite the handout, removing the semaphore or setting it to true. That way, if the API fails while actually handling your changes, you still have an accurate representation of your data at the point of action, but (at least in my case) without any danger of it becoming permanent. This is not very efficient, but such is life when storage options are limited.
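In code, the semaphore pattern might look roughly like this (the handout and record names are illustrative):

```javascript
// Sketch of the semaphore idea: journal the change with processed:false,
// apply it, then rewrite with processed:true, so a crash mid-apply leaves
// an accurate record of what was in flight.
function applyChange(handout, change, apply) {
    // 1: journal the intent before touching anything
    handout.set('gmnotes', JSON.stringify({ processed: false, change: change }));
    // 2: do the actual work
    apply(change);
    // 3: mark it done
    handout.set('gmnotes', JSON.stringify({ processed: true, change: change }));
}

// On startup, a processed:false record means the last change may be half-applied.
on('ready', function() {
    var handout = findObjs({ _type: 'handout', name: 'ChangeLog' })[0];
    if (!handout) { return; }
    handout.get('gmnotes', function(notes) {
        var record;
        try { record = JSON.parse(notes || '{}'); } catch (e) { record = {}; }
        if (record.processed === false) {
            log('Unprocessed change found after restart: ' + JSON.stringify(record.change));
        }
    });
});
```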
Ken L.

iamthereplicant said: As far as I know, the only real way to handle the sandbox-dying sync issue is to write to the handout but add a semaphore property that tells your code it hasn't been processed yet...

Good point, I hadn't thought about that. I'm gunning on the supposition that, in terms of processing, at least the current JS message (that being the write operation) completes before the engine is yanked. JavaScript, as I understand it (disclaimer), processes messages on a single thread, which is why event interrupts are safe: even if an event comes in, the current message still runs to completion. Whatever process scheduler runs above the sandbox, I'm guessing it divides discretely on message boundaries, as that's the only thing that makes sense. Then again, that depends on the write logic; if a save is split into many smaller writes, the semaphore wouldn't help when the interrupt arrives between those write messages. The corollary is that I'd be boned as well, even saving after each operation; in fact, everyone would be. It would be great if we had some dev clarification on this.
iamthereplicant

Seconded on the dev clarification. The write logic problem currently exists in a state of 'who watches the watchmen'. Until we know more about how exactly that process works, you pretty much just have to do backups, which require additional writes (i.e., more points of failure), or else manual backups before every change, which defeats the point. I'm banking on the API not failing on write and just making a copy of the handout repo at the start of every game.
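A sketch of that session-start backup (handout names are illustrative):

```javascript
// Copy the repo handout's gmnotes into a dated backup handout on 'ready',
// before any writes happen this session. Names are illustrative.
on('ready', function() {
    var repo = findObjs({ _type: 'handout', name: 'HandoutRepo' })[0];
    if (!repo) { return; }
    repo.get('gmnotes', function(notes) {
        var backup = createObj('handout', {
            name: 'HandoutRepo Backup ' + new Date().toISOString()
        });
        backup.set('gmnotes', notes);
    });
});
```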