[Important!] API Changes

July 17 (10 years ago)
Riley D.
Roll20 Team
As most of you may know, we've had some issues lately with the API going down during peak times on Roll20. The API server is a "shared space", meaning that although the actual execution of scripts is isolated from each other, we're all sharing the same memory, CPU, etc. We've always employed a basic "heartbeat" method to keep scripts from causing infinite loops that locked up the server, but recently this no longer seems to be an effective enough method of policing scripts' usage.

So, today I've made a few changes to the API. As with anything API-related, there may be bugs to work out over the next few days while we see this in use across so many different types of campaigns and API scripts, but please bear with us as we work to make the API server more robust for everyone.

Better Error Messages

You should now get two different error messages when your API script sandbox unexpectedly quits. One will say something generic like "Sandbox shut down due to unexpected error.", then another should appear with more detail, such as "Heartbeat failed, infinite loop most likely culprit." That should hopefully give us more information to work with as we attempt to figure out what's going wrong.

More Stringent Resource Limitations

As I said, the server is a shared space, and as such one of our jobs is to "police" scripts and make sure that no single campaign is using an overabundance of resources. We have now implemented a system which can track the CPU and memory usage of your campaign's sandbox. Honestly, I don't want to go into a ton of technical detail in terms of "how much memory and CPU you get", since the goal here isn't to make everyone start treating their sandbox like a little computer. It's more to help us identify scripts that are using way more memory/CPU than we would expect them to, and when that happens, to work with the GM of that campaign to figure out what's going on.

So, if you get an error for your API along the lines of "Too much memory" or "Too much CPU", please let us know by responding here so we can start trying to figure out what's going on with that. Note that both the CPU and memory figures are averages taken over about a 1-minute window, so small "bursts" of CPU will not trigger it -- only long, sustained high CPU usage (which probably indicates that you aren't using the API properly in the first place).

Hopefully these new changes will at least help us start digging into the underlying issue so we can make sure the API is able to scale well with Roll20's growth going forward.

Thanks, and let me know if you have any questions about this!
July 17 (10 years ago)
Riley D.
Roll20 Team
Also, note that while I was playing around with things this morning, it's entirely possible your API script had an "error" triggered on it. Please feel free to just ignore those and click "Save" to re-enable your API script. Your API scripts MAY NOT WORK until you go and re-save them! If you get the error again after that, please let us know.
Something seems off all of a sudden, or else I don't understand what could cause an infinite loop (which is quite possible, as I am a novice). Everything I run now is generating "Heartbeat timed out, infinite loop most likely culprit."

I actually commented out everything except for the on("chat:message", function (msg)) handler and a simple log() statement, and it still generates the Heartbeat timed out error.
July 17 (10 years ago)
Is there a way for you to see which scripts are the biggest culprits?
July 17 (10 years ago)
The Aaron
Roll20 Production Team
API Scripter
It does seem like it should at least know the call stack at the time the error occurs. The line numbers would probably be off because of the way the scripts are concatenated together for execution, but the function names should be revealing, provided you've done the slightest functional decomposition.
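A small sketch of The Aaron's point: give your handlers and helpers names, and a crash's stack trace will identify them even when the line numbers are shifted by script concatenation. The function names below are invented for illustration.

```javascript
// Named functions appear by name in a thrown error's stack trace,
// which is what makes "the slightest functional decomposition" useful
// for debugging sandbox crashes.

function handleChatMessage(msg) {
  rollInitiative(msg); // shows up in the stack when rollInitiative throws
}

function rollInitiative(msg) {
  throw new Error("simulated crash");
}

let stack = "";
try {
  handleChatMessage({ content: "!init" });
} catch (e) {
  stack = e.stack; // contains "rollInitiative" and "handleChatMessage"
}
```

An anonymous callback would show up only as `<anonymous>` in the same trace, telling you nothing about which script failed.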
After hitting "save" again on all the scripts I have, the API gives me this error a few moments later "Error downloading scripts (probably no scripts exist for campaign.)" Even before it errors out, none of the scripts seem to work at all.
July 17 (10 years ago)
My test campaign with just my latest powercard script is stuck on spinning up a new sandbox.
July 17 (10 years ago)
The Aaron
Roll20 Production Team
API Scripter
Lake: When I've had that error in the past, I found refreshing the scripts page completely and then saving a script cleared it up.
I don't think it's just Lake's problem; it's acting this way for me as well in all campaigns (even with scripts that I didn't write and that have worked for months). If you click Save, it waits maybe 10-15 seconds, then "Restarting sandbox due to script changes..." starts up. After quite some time (minutes!), the "Spinning up new sandbox..." message appears, then minutes later you get the "Error downloading scripts (probably no scripts exist for campaign.)" message.
I'm having that problem as well. No scripts are working for me at all.
July 17 (10 years ago)
The Aaron
Roll20 Production Team
API Scripter
Yeah, I'm getting this now too.
July 17 (10 years ago)

Edited July 17 (10 years ago)
Riley D.
Roll20 Team
Okay, I'm looking into it. As I said, it may be a bug I introduced with the new changes.

EDIT: If you are having a problem with a campaign, please include a link to the details page for the campaign so I can take a look at it specifically as well. Thanks!
Assuming this is the link you're needing, Riley?

https://app.roll20.net/campaigns/details/123013/eberron-v2-dot-0
July 17 (10 years ago)
Riley D.
Roll20 Team

HoneyBadger said:

Is there a way for you to see which scripts are the biggest culprits?

Not right now, no, but if there are any that are causing multiple campaigns to stop working due to memory/CPU it will become apparent here pretty quickly I guess.
July 17 (10 years ago)
Riley D.
Roll20 Team

Lake / James said:

Assuming this is the link you're needing, Riley?

https://app.roll20.net/campaigns/details/123013/eberron-v2-dot-0

Yes, that's the one, thanks.
July 17 (10 years ago)

Edited July 17 (10 years ago)
Riley D.
Roll20 Team

Lake / James said:

Assuming this is the link you're needing, Riley?

https://app.roll20.net/campaigns/details/123013/eberron-v2-dot-0

I just tried yours and it seemed to work fine for me (I pressed "Save", then loaded up the game in another tab, and the sandbox started up fine). Did you try running the campaign in the last few hours and it gave you that error? Or did you just see that error when you went to the API scripts screen and stop there?

Note that I didn't actually test any of the functionality of the scripts themselves since I don't know what you have all in there and what they're supposed to do, so let me know if you encounter problems when you actually go to use the script's functionality. But at least the "no scripts for campaign" thing seems to be resolved...
July 17 (10 years ago)
Riley D.
Roll20 Team

Kevin said:

Something seems off all of a sudden, or else I don't understand what could cause an infinite loop (which is quite possible, as I am a novice). Everything I run now is generating "Heartbeat timed out, infinite loop most likely culprit."

I actually commented out everything except for the on("chat:message", function (msg)) handler and a simple log() statement, and it still generates the Heartbeat timed out error.

Get me a link when you can because I'm interested to see what's going on there.
July 17 (10 years ago)
Can I get any idea if my campaign is getting anywhere near the boundaries of "CPU usage" for the servers?

Honestly my campaign has been a project which has been worked on by a team for several months now, so this is a bit of a concern for myself and all my players.
July 17 (10 years ago)
Riley D.
Roll20 Team

Michael A. said:

Can I get any idea if my campaign is getting anywhere near the boundaries of "CPU usage" for the servers?

Honestly my campaign has been a project which has been worked on by a team for several months now, so this is a bit of a concern for myself and all my players.

Yeah I was just looking at yours now actually. I am going to send you a PM.
July 17 (10 years ago)

Edited July 17 (10 years ago)
Riley: https://app.roll20.net/campaigns/details/414779/pa...

Currently the script that I have in there is completely commented out except for the basics necessary to run the !import command.

EDIT: Right this second it seems to be functional, so I am going to uncomment the script and see if the problem is reproducible.
July 17 (10 years ago)
Riley D.
Roll20 Team

Kevin said:

Riley: https://app.roll20.net/campaigns/details/414779/pa...

Currently the script that I have in there is completely commented out except for the basics necessary to run the !import command.

It looks like it's working okay now, right?

Riley D. said:

Lake / James said:

Assuming this is the link you're needing, Riley?

https://app.roll20.net/campaigns/details/123013/eberron-v2-dot-0

I just tried yours and it seemed to work fine for me (I pressed "Save", then loaded up the game in another tab, and the sandbox started up fine). Did you try running the campaign in the last few hours and it gave you that error? Or did you just see that error when you went to the API scripts screen and stop there?

Note that I didn't actually test any of the functionality of the scripts themselves since I don't know what you have all in there and what they're supposed to do, so let me know if you encounter problems when you actually go to use the script's functionality. But at least the "no scripts for campaign" thing seems to be resolved...

It seems to be working fine, but yes, about an hour ago when I first posted is when I got that error and came here to check and see if things were down or if anyone else was having issues. So I went and tried what Aaron suggested, and made sure I saved each of the scripts, and then came back to give you the link. I stopped messing with it about 30 minutes ago, and it seems to have started working between then and now.
Yes, it seems to be working right now again.
July 17 (10 years ago)
Riley D.
Roll20 Team
Okay, my *guess* is that there was some sort of system-wide failure, and of course it's one of those failures where if you're not around to see it happen you can't figure out what went wrong. For now, things seem to be working properly again.

That said, I am definitely starting to see some patterns (not to do specifically with the people I was just helping, just in general) that we are going to have to address in the near future to keep the API ecosystem healthy and working. There are some types of scripts I'm seeing looking through these logs that really are pretty close to what I would consider "abuse" of the system, or at the very least they aren't respecting the way that the system was designed to be used.

I think that there was a time when there were only a handful of people really using the API, and for the most part scripts were pretty simple. But as more people start using it and the scripts continue to increase in complexity, we're going to have to set some guidelines going forward for what's acceptable and what isn't. I'd prefer to just leave it as an "advisory" and not have to put in any actual restrictions since honestly that just takes more time out of my day I could spend making great new features, but we'll see how it goes.

I will probably make a new post tonight with thoughts about that. 95%+ of you will have nothing to worry about, and again no one is in trouble or anything, this is just a growing pain that we need to address before the whole thing collapses.
July 17 (10 years ago)
Lithl
Pro
Sheet Author
API Scripter

Riley D. said:

There are some types of scripts I'm seeing looking through these logs that really are pretty close to what I would consider "abuse" of the system, or at the very least they aren't respecting the way that the system was designed to be used.

I admit, I'm kinda curious what that's supposed to mean. I'm having a hard time thinking of a means to abuse the system that hasn't already been closed. I mean, I think it's still possible to poison your object data, but that gets fixed when you start a new session and I can't fathom how making your objects more difficult to work with locally would harm the system.

July 17 (10 years ago)
The Aaron
Roll20 Production Team
API Scripter
Are you talking about abusing capabilities or are you talking about ethical concerns?
July 18 (10 years ago)

Edited July 18 (10 years ago)
Riley D.
Roll20 Team
There are 3 main types of scripts I'm seeing that are causing (from what I can tell) the lion's share of the current issues we're experiencing:

Scripts That Are Too Large, Usually "Databases"

There are people that are running 50,000+ lines of code in their campaigns. That might literally be more lines of code than it takes to run large parts of Roll20. The usual pattern I'm seeing here is with people trying to shoehorn "databases" into their scripts -- for example there's one floating around that is 50,000+ lines of code that is just a giant database of Pathfinder spells. On top of that, the database is being saved to the state object, so every 5 seconds the system is having to write something like 10MB of JSON data to the disk. Not super performant.

Please consider that when you have 10MB+ of script source code, that is a lot of data to move around, compile, and execute. In addition, your entire source code tree shouldn't be saving itself to the state object.
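A minimal sketch of the pattern Riley is asking for: keep large static data as a plain constant in the script source, and write only small, dynamic values to `state`. Here `state` is a stand-in object for the sandbox's persisted global, and `SPELL_DB` and its fields are invented for illustration.

```javascript
// Static lookup data lives only in script source -- it is compiled once
// and never written to disk every 5 seconds.
const SPELL_DB = {
  "magic missile": { level: 1, school: "evocation" },
  "fireball":      { level: 3, school: "evocation" },
  // ...thousands more entries can live happily here, never in state
};

const state = {}; // stand-in for the sandbox's persisted `state` global

function lookupSpell(name) {
  return SPELL_DB[name.toLowerCase()] || null;
}

function rememberLastSpell(name) {
  state.lastSpellCast = name; // a few bytes persisted, not the whole database
}

rememberLastSpell("Fireball");
```

The database is still instantly available to the script; the only thing that changes is that the persistence layer writes a few bytes instead of megabytes.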

Scripts That Are Importing Things via the Notes Fields

The notes field is not meant to be used as a bridge between the API server and the outside world, period. I let this go for a bit because it seemed innocent enough, but again now we have people who are importing 10MB of XML data into their Roll20 Campaigns. The system simply isn't designed to support that.

Scripts That Are Using the Notes Field Like a UI

There is no "delta update" for the notes, GM notes, bio, etc. So when you re-update the entire notes field 5 times within a 10-second period, you are sending the entire contents of the notes field down the pipe to every client in the game 5 times in 10 seconds. That is a lot of data. The notes field is meant for notes. I would think the most frequently it would be updated in a real-usage scenario is a few times per minute -- certainly not 5 times in 10 seconds.
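A hedged sketch of client-side throttling in the spirit of the delay Riley describes below: rapid repeated updates collapse so that at most one write goes out per interval. The `write` callback is a stand-in for whatever actually performs the set (e.g. an `obj.set("notes", ...)` call in the sandbox).

```javascript
// Leading-edge throttle: the first write goes through immediately,
// subsequent writes within the interval are skipped.
function makeThrottledWriter(write, intervalMs) {
  let lastWrite = -Infinity;
  return function (text) {
    const now = Date.now();
    if (now - lastWrite >= intervalMs) {
      write(text);
      lastWrite = now;
    }
    // else: skip this write; a trailing-edge flush via setTimeout could be
    // added so the final value eventually lands
  };
}

// Usage sketch: at most one notes write every 30 seconds.
let sent = 0;
const writeNotes = makeThrottledWriter(function () { sent++; }, 30000);
for (let i = 0; i < 5; i++) writeNotes("update " + i);
// only the first of the five rapid calls actually writes
```

This is only an illustration of the general technique; the actual server-side delay would behave like this whether or not scripts cooperate.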

---

As you can see, each of these is a case of trying to get around a "limitation" of the API. The notes field is clearly not meant to be used as a UI, nor as a data-transfer mechanism. And while it's okay to have campaigns with lots of scripts, when it's taking your campaign's scripts page 30 seconds to load because you have 10MB of source code that is mostly just spell definitions, you're clearly working outside what the system was intended for.

The most likely measures that we will be putting into place:

1) Restricting the total size of the state object. It will be plenty large for most purposes but it was never intended to hold 10MB+ of information (especially info that doesn't even really change).

2) Adding a delay to writing to the notes field. The first update you do to the notes field will be instant, but then there will probably be some sort of a delay (30 seconds or a minute) until the next "write" is sent to the clients. For most purposes you won't even notice this, but it will keep people from using the notes field like some sort of UI.
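A rough, hedged way to gauge how much data a script keeps in `state` is to measure the serialized JSON, since that is what gets persisted. The `state` contents and the warning threshold below are made up for illustration; no official limit has been announced.

```javascript
// Stand-in for the sandbox's persisted `state` global.
const state = { lastSpellCast: "fireball", settings: { darkMode: true } };

function stateSizeBytes(obj) {
  return JSON.stringify(obj).length; // close enough for mostly-ASCII data
}

const WARN_BYTES = 256 * 1024; // hypothetical comfort threshold, not an official cap
const size = stateSizeBytes(state);
const overBudget = size > WARN_BYTES;
```

Logging this once at startup would make it obvious long before any hard restriction kicks in whether a script is quietly accumulating megabytes of persisted data.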

Again, there's nothing to be upset about here since we've never given guidance on these things before. I knew some of them could potentially be issues but the time has simply come where the system can't handle some of these things anymore (as you've all seen by the downtime recently). I think that it should be clear that all of these uses were outside the bounds of what the system was intended for, and if it wasn't clear hopefully this clarifies it.

Feel free to let me know if you have any questions on this. I will most likely be putting those restrictions into place early next week. In the mean time again please let me know if you run into the CPU or Memory restrictions as I'm interested to see if there are any other types of scripts I'm not aware of that are causing issues.

Thanks!
July 18 (10 years ago)

Edited July 18 (10 years ago)
The Aaron
Roll20 Production Team
API Scripter
Couple questions I know will come up, so I'll bite the bullet and ask them:

  1. Is there any chance that you'll be introducing a less impactful method of uploading data to the API without editing a script?
  2. Are there any plans for introducing a means of interacting with the scripts via a custom UI?
  3. If someone wanted to have a database style of script, what would be the appropriate method?

(Edit: And thanks for the clarity, I'm sure we all appreciate the frank response!)
There's a script which uses the gmnotes field to import monster statistics. However, the XML is never more than 100KB. Is that unacceptable too?
July 18 (10 years ago)
Riley D.
Roll20 Team

Aaron said:

Couple questions I know will come up, so I'll bite the bullet and ask them:

  1. Is there any chance that you'll be introducing a less impactful method of uploading data to the API without editing a script?
  2. Are there any plans for introducing a means of interacting with the scripts via a custom UI?
  3. If someone wanted to have a database style of script, what would be the appropriate method?

(Edit: And thanks for the clarity, I'm sure we all appreciate the frank response!)

1. At some point, yes. That's always been the intent. I know that people want to get info in there, but this way is just not going to scale well as we've already seen.

2. Not currently, no. I've toyed around with the idea of letting API scripts add "hooks" to the UI, but I don't have any definite plans on that.

3. I'm not sure if there is one currently.

Jarret B. said:
There's a script which uses the gmnotes field to import monster statistics. However, the XML is never more than 100KB. Is that unacceptable too?

You'll note that the "importing" thing is the only one I don't have a plan to put in a hard restriction on currently. If people are aware of the issue going forward and everyone can be smart (e.g. only importing small amounts of data, and obviously not importing your data at 8 PM on a Saturday), then we might be able to let this one slide until we have a better method. But if it continues to be an issue where folks are importing 10MB of data at once during peak hours, I will just have to disable it or put a cap on the size of the notes or something.
July 18 (10 years ago)
The Aaron
Roll20 Production Team
API Scripter
Riley D. said:

2. Not currently, no. I've toyed around with the idea of letting API scripts add "hooks" to the UI, but I don't have any definite plans on that.
Would you consider adding a syntax to /direct anchors that allows them to execute an API command? Something like:
<a href="sendChat:!cal next">Tomorrow</a>
would let us send UI without adding anything more than typing the commands already does.

Of course, to be really useful, we'd need a /direct whisper.
July 18 (10 years ago)
Riley D.
Roll20 Team
I don't really want to turn this into a feature suggestion thread...I'm not opposed to the idea, but at this point my main goal is "get things to a state where they don't crash every night", then we can start working on replacing lost functionality.
July 18 (10 years ago)
The Aaron
Roll20 Production Team
API Scripter
Fair enough. Thanks! =D
July 18 (10 years ago)
Chad
Sheet Author
I just started getting errors on scripts which always run instantly (they just set HP to 0). Does that mean anything to you?

events.js:72
throw er; // Unhandled 'error' event
^
Error: listen EADDRINUSE
at errnoException (net.js:904:11)
at Server._listen2 (net.js:1023:19)
at listen (net.js:1064:10)
at Server.listen (net.js:1132:5)
at Sandbox.start (/home/symbly/www/d20-api-server/sandcastle/lib/sandbox.js:35:15)
at Object.<anonymous> (/home/symbly/www/d20-api-server/sandcastle/bin/sandcastle.js:11:9)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
I've started getting heartbeat failed error messages on my copy of the automatic initiative rolling script. Can't see any obvious reasons on my end why this should be happening.

https://app.roll20.net/forum/post/207066/script-auto-initiative/#post-220693
July 18 (10 years ago)
Lithl
Pro
Sheet Author
API Scripter

Riley D. said:

Scripts That Are Too Large, Usually "Databases"

There are people that are running 50,000+ lines of code in their campaigns. That might literally be more lines of code than it takes to run large parts of Roll20. The usual pattern I'm seeing here is with people trying to shoehorn "databases" into their scripts -- for example there's one floating around that is 50,000+ lines of code that is just a giant database of Pathfinder spells. On top of that, the database is being saved to the state object, so every 5 seconds the system is having to write something like 10MB of JSON data to the disk. Not super performant.

Please consider that when you have 10MB+ of script source code, that is a lot of data to move around, compile, and execute. In addition, your entire source code tree shouldn't be saving itself to the state object.

Is the problem mostly large codebases, or storing all of that in state? For example, I've got a half-finished Fiasco automation script which stores the playset tables as objects, but they're simply global objects in the script, not pushed into state. Each playset is 144 strings (each ~1 sentence) plus the boilerplate to get the object structure.

Further, if disabling a script tab reduces the memory footprint, my script setup has each playset's table data in a separate tab, making it easy to disable them individually. OTOH, if the VTT simply "hides" the disabled tabs at runtime, that wouldn't help any.

July 18 (10 years ago)
Riley D.
Roll20 Team
@Chad @Drew I think those should be fixed.

July 18 (10 years ago)
Riley D.
Roll20 Team

Brian said:

Riley D. said:

Scripts That Are Too Large, Usually "Databases"

There are people that are running 50,000+ lines of code in their campaigns. That might literally be more lines of code than it takes to run large parts of Roll20. The usual pattern I'm seeing here is with people trying to shoehorn "databases" into their scripts -- for example there's one floating around that is 50,000+ lines of code that is just a giant database of Pathfinder spells. On top of that, the database is being saved to the state object, so every 5 seconds the system is having to write something like 10MB of JSON data to the disk. Not super performant.

Please consider that when you have 10MB+ of script source code, that is a lot of data to move around, compile, and execute. In addition, your entire source code tree shouldn't be saving itself to the state object.

Is the problem mostly large codebases, or storing all of that in state? For example, I've got a half-finished Fiasco automation script which stores the playset tables as objects, but they're simply global objects in the script, not pushed into state. Each playset is 144 strings (each ~1 sentence) plus the boilerplate to get the object structure.

Further, if disabling a script tab reduces the memory footprint, my script setup has each playset's table data in a separate tab, making it easy to disable them individually. OTOH, if the VTT simply "hides" the disabled tabs at runtime, that wouldn't help any.


Well, the *main* problem is saving to the state object. That's the only thing we're looking to put a hard limitation on. Does your Fiasco script have 50,000+ lines of objects? If so, you might want to rethink doing that; if not, then it's fine. Again, these are extremes we're talking about.

Disabling a script does completely disable it, meaning that the source code is never sent to the API server when your campaign boots up, so it's as if the script was never entered in the first place.
July 18 (10 years ago)

Edited July 18 (10 years ago)
Riley D.
Roll20 Team
As a general update, I am continuing to monitor things today. Right now my main focus is on re-working the "master process" to make sure that there aren't any memory or CPU leaks. What I'm seeing is that things start off fine, then over a period of 6-8 hours the CPU usage builds to a point where the whole service starts running slowly (which is what is leading to the "Heartbeat" errors for scripts that normally work fine). I have rigged up a testing system that lets me simulate spawning 1,000 API servers back-to-back, so hopefully I can figure out what's going on. I'll keep you all posted.
July 18 (10 years ago)
Lithl
Pro
Sheet Author
API Scripter

Riley D. said:

Well, the *main* problem is the state saving object. That's what that's the only thing we're looking to put a hard limitation on. Does your Fiasco script have 50,000+ lines of objects? If so you might want to rethink doing that, if not that it's fine. Again these are extremes we are talking about.

~2,000 lines of actual script (not quite finished, but the end result will be close to that) and ~16,000 lines of playset object data using all currently available playsets (although the campaign currently only has one playset for testing, at 276 lines). So, around 36% of the Pathfinder script you found, when and if I complete it, although Bully Pulpit Games occasionally releases additional ones. (They used to have a playset-of-the-month thing going, but that appears to have stopped.)

July 18 (10 years ago)
Riley D.
Roll20 Team
But you would only have one playset script enabled at once? That would be fine. 18,000 lines total is *probably* okay as long as it's not being saved to the state object. Again, if it's not taking 30 sec+ to load the script editor page, it's probably okay.
July 18 (10 years ago)
The Aaron
Roll20 Production Team
API Scripter
Would it be better to break long scripts like this up across multiple script tabs, or does that have any effect at all? (I assume it doesn't, but just making sure...)
July 18 (10 years ago)
Riley D.
Roll20 Team
Doesn't make a difference, it's based on the total size of all enabled scripts merged together.
July 18 (10 years ago)
Lithl
Pro
Sheet Author
API Scripter

Riley D. said:

But you would only have one playset script enabled at once? That would be fine. 18,000 lines total is *probably* okay as long as it's not being saved to the state object. Again, if it's not taking 30 sec+ to load the script editor page, it's probably okay.

The original intent of the campaign includes some API commands to select the playset for a single game, which would necessitate all of the playsets being enabled. That would let people in the campaign hop online and run a game without my presence. However, if the "database" objects were an issue, I could simply trim off some of the automation, and require my presence, selecting the playset manually by enabling/disabling script tabs.

July 18 (10 years ago)
Guess I jumped the gun on my scripts prematurely. Most of them at least. Haven't tested the importers, but they only deal with 100'ish kb of data at a time and that data is then deleted from the token.
July 19 (10 years ago)
Riley D.
Roll20 Team

HoneyBadger said:

Guess I jumped the gun on my scripts prematurely. Most of them at least. Haven't tested the importers, but they only deal with 100'ish kb of data at a time and that data is then deleted from the token.

Yeah the importers may be an issue if people abuse them (see my post up-thread on that), but I don't think any of your others are a problem at all.
July 19 (10 years ago)
Riley D.
Roll20 Team
So far tonight things are going smoothly *knock on wood*. With the recent round of changes to the master process, the API server has been running all day without an issue. Right now during peak time we are seeing heavy usage in terms of number of campaigns that are actively running with API scripts, and the master process CPU levels and memory usage are holding steady (not increasing like they were before). So assuming all continues well with that trend, we may have a good handle on this.

In addition, only 1 campaign has needed to be killed due to memory usage, and only 3 campaigns have needed to be stopped due to CPU usage. Of those, 2 were barely over the limit (and we may consider increasing it), but one was using something like 275% of CPU, so I'm not sure what was going on there, but I'm glad the system caught it and stopped it. This is out of hundreds of campaigns that have run since those changes were put into effect, so as I said, the vast majority are unaffected by the new changes, which is good. If you are one of those who received a memory or CPU error and need help figuring out what's going on, feel free to reach out via PM and let me know. Thanks!
July 19 (10 years ago)

Riley D. said:

HoneyBadger said:

Guess I jumped the gun on my scripts prematurely. Most of them at least. Haven't tested the importers, but they only deal with 100'ish kb of data at a time and that data is then deleted from the token.

Yeah the importers may be an issue if people abuse them (see my post up-thread on that), but I don't think any of your others are a problem at all.

Cool. I can't think of any monsters that would even reach more than a megabyte or two. Maybe an epic level character might... but all the important data just gets written to attributes, macros, and then erased from the token.
July 19 (10 years ago)
Riley D.
Roll20 Team
A couple more tips (I am going to compile these into a wiki article when this is all said and done):

Don't Overuse setInterval/setTimeout

I have seen a few scripts that are using setInterval a lot with very short intervals (e.g. less than 100ms). First off, if at all possible your scripts should rely on events to track things. When you're using a setInterval function, that function runs even if absolutely nothing is happening in your game. So if at all possible, avoid those. If you absolutely must use them, please set the interval to something reasonable. Nothing happens in a pen and paper RPG that needs to be tracked more frequently than once per second (e.g. 1000ms), and it can probably be set to even longer intervals. But consider: if you just change it from 100ms to 1000ms, no person will notice the difference (heck, your lag to the server is something like 250ms on average), but the CPU will do 1/10th the amount of work! This is one of the main culprits I'm seeing in terms of using too much CPU.

And if you have several scripts all using setInterval, consider the total amount of things you have going on. 25+ setIntervals in a single campaign is simply too much.
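The arithmetic behind Riley's advice above is worth spelling out: widening a poll interval from 100ms to 1000ms is invisible to players but cuts the work tenfold.

```javascript
// How many times a setInterval callback fires per minute at a given interval.
function runsPerMinute(intervalMs) {
  return Math.floor(60000 / intervalMs);
}

const aggressive = runsPerMinute(100);  // 600 callback runs per minute
const relaxed    = runsPerMinute(1000); // 60 runs per minute, 1/10th the CPU
```

Multiply that by 25+ intervals in one campaign and the CPU cost of tight polling becomes obvious, even when each individual callback is cheap.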

Campaign Data Size Matters

The memory overage issues for the most part seem to be driven by the amount of data in the campaign. A good way to just "feel" this happening in your campaign is when you load it up to play it. Does it take 30+ seconds to load and feel like your whole computer is locking up? That's most likely due to the amount of data your computer is downloading and processing. Typically I see this either a) through a lot of pages (like 30+) in a campaign, or b) through a lot of Journal entries (Characters/Handouts), paired with a lot of attributes/abilities.

Keep in mind that the API is not able to ignore archived things the way your players can. So if you have a campaign with 30+ pages, even if half of them are archived, it's still going to eat through a lot of memory on the API side of things.

For the Characters/Handouts, one thing I've seen is that some folks are getting really carried away with the scripts that add attributes/abilities to Characters upon creation. First off, if at all possible please use the new character sheets system. It is way more efficient than adding attributes and abilities yourself, since it is one piece of data shared across all characters. Repeating data is always a bad idea. If you can't do that, keep in mind that you probably don't need 100+ characters to each have 100+ abilities and 100+ attributes. Those should really only be on main NPCs and the PCs. And if you do want access to all of those abilities, then consider global macros -- again, avoid repeating data whenever possible.

So take some care when using those on("add:character") scripts and make sure you aren't creating a ton of unnecessary data that is going to bog down your campaign.
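A self-contained sketch of a restrained on("add:character") handler. In the real sandbox, `on` and `createObj` are provided globals; here they are stubbed so the example runs anywhere, and the attribute names are invented. The point is to create a short, essential list per character rather than hundreds of entries.

```javascript
// Stubs standing in for the sandbox globals (illustration only):
// the fake `on` immediately simulates one character being added.
const created = [];
function createObj(type, props) { created.push({ type: type, props: props }); return props; }
function on(event, fn) { if (event === "add:character") fn({ id: "char1" }); }

on("add:character", function (character) {
  // A handful of core attributes -- not 100+ per character.
  ["hp", "ac", "speed"].forEach(function (name) {
    createObj("attribute", { characterid: character.id, name: name, current: 0 });
  });
});
```

Anything beyond a core list like this is usually better expressed once, as a character sheet field or a global macro, rather than duplicated onto every character.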

---

Again, these issues are only affecting a handful of folks, but I thought I would point them out so others are aware of them. If you aren't receiving any CPU or memory errors in your API log, then you are fine, but it never hurts to know the things that are causing issues so they can be avoided in the future.

Thanks!
July 19 (10 years ago)
The Aaron
Roll20 Production Team
API Scripter
Regarding the setInterval()/setTimeout() running even when no one is in the game, is there a way from the API to detect that no one is connected and shut them down? Something like on('suspend:campaign',...) or something on the campaign object?
July 19 (10 years ago)
Riley D.
Roll20 Team

Aaron said:

Regarding the setInterval()/setTimeout() running even when no one is in the game, is there a way from the API to detect that no one is connected and shut them down? Something like on('suspend:campaign',...) or something on the campaign object?

The API automatically shuts down your sandbox when no one has been playing your game for 10 minutes. Of course, some folks end up leaving their browsers open 24/7 with their game loaded, but that's another issue.

What I meant was: I'm seeing setIntervals that scan through the whole list of characters to check whether X has happened, or to update Y variable. That scanning process is using the CPU. When you're doing it every 100ms, that's not ideal. Backing it off to 1000ms won't make much difference from a player perspective, but it will make a huge difference from a CPU perspective.

And then my other point was that "scanning" is not a super-efficient way to design something in the first place. If you are wanting to know when a character's attribute changes, instead of scanning every character every second to see, just add a listener onto change:attribute:attribute_name and you'll know every time it changes.
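The listener pattern Riley describes can be sketched as follows. In the real sandbox, `on` is a provided global and the event name `change:attribute:hp` follows the documented `change:attribute:attribute_name` form; here `on` is stubbed (along with a `fire` helper) so the example is self-contained, and the attribute shape is simplified.

```javascript
// Tiny stub event bus standing in for the sandbox's event system.
const handlers = {};
function on(event, fn) { (handlers[event] = handlers[event] || []).push(fn); }
function fire(event, payload) { (handlers[event] || []).forEach(function (fn) { fn(payload); }); }

let deaths = 0;

// Preferred: react only when the attribute actually changes...
on("change:attribute:hp", function (attr) {
  if (attr.current <= 0) deaths++;
});

// ...instead of something like:
// setInterval(function () { /* scan every character's hp */ }, 100);

fire("change:attribute:hp", { current: 12 }); // above zero: nothing to do
fire("change:attribute:hp", { current: 0 });  // counted
```

The listener does zero work while nothing changes, whereas the commented-out polling version burns CPU scanning every character ten times a second regardless of activity.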