
Generator Functions for Async Handling?

I'm still new to JavaScript, so I've only just learned about these fancy new generator functions, which are touted as being just the thing for handling asynchronous function queues. I've always wanted a way to add new async function calls to the end of a queue that was already running, so that when the queue reached its original end, it would spot a new function waiting to go and simply continue running. Am I right in thinking that generator functions can do this? (Here's a link to the article that explained all of this to me!)
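For what it's worth, a generator by itself doesn't know anything about async work; you need a small "runner" that resumes the generator each time a yielded Promise settles. A minimal sketch of that idea (the names `run` and `delay` are just for illustration, not from any library):

```javascript
// Drive a generator that yields Promises: each time a yielded Promise
// settles, resume the generator with the resolved value, so the steps
// read top-to-bottom even though each one is asynchronous.
function run(makeGenerator) {
    const gen = makeGenerator();
    function step(value) {
        const result = gen.next(value);
        if (result.done) {
            return Promise.resolve(result.value);
        }
        // wait for the yielded Promise, then resume with its value
        return Promise.resolve(result.value).then(step);
    }
    return step();
}

// helper: a Promise that resolves with 'value' after 'ms' milliseconds
const delay = (ms, value) => new Promise(res => setTimeout(() => res(value), ms));

// usage: each 'yield' pauses the generator until the Promise settles
run(function* () {
    const a = yield delay(10, 1);
    const b = yield delay(10, a + 1);
    console.log(b); // prints 2
});
```

This is essentially what libraries like `co` did, and it's the machinery that `async`/`await` later built into the language; it doesn't by itself give you a queue that picks up newly added work, though, which is what the answer below addresses.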

Ammo
OMG, that looks horrible. :) You don't have to go to that level of disgusting in order to do reasonable async programming in JavaScript. If you use Promises ( https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise ) you can even make it look nice.

In terms of a queue for async work that will notice new work and execute in priority order, here is one I wrote (originally in TypeScript, in my repo if you want that: https://github.com/derammo/der20/blob/master/src/der20/plugin/promise.ts ). Here it is compiled to JavaScript.

To use it, just create however many priority levels you want via createPriorityLevel(...) and use the numbers returned from those calls as priority values for scheduleWork(...). The particularly cute thing about this queue is that if you work with an API that already returns Promise objects to you, you can put these in the queue via trackPromise(...) so that it knows about them and won't let lower-priority work skip ahead of them. I use this queue to make sure my API scripts don't start another command execution while there are still async operations running from the previous one, or reconfigurations going on, etc.

```javascript
// 'debug' was a logging dependency edited out of the original post;
// this no-op stub keeps the code runnable as posted
const debug = { log: () => {} };

class Task {
}

class Priority {
    constructor() {
        this.name = 'unnamed queue';
        this.concurrency = 1;
        this.running = 0;
        this.limit = 1024;
        this.waiting = [];
    }
}

class PromiseQueue {
    constructor(name) {
        this.levels = [];
    }

    createPriorityLevel(options) {
        let queue = new Priority();
        if (options !== undefined) {
            Object.assign(queue, options);
        }
        this.levels.push(queue);
        return this.levels.length - 1;
    }

    scheduleWork(priority, work, handler) {
        let task = new Task();
        task.work = work;
        task.handler = handler;
        this.schedule(priority, task);
    }

    // NOTE: we already have the promise from some other API so it is actually already running, but
    // we don't consider it as running until we wait for its result and we run its handler
    // within the concurrency limit
    trackPromise(priority, promise, handler) {
        let task = new Task();
        task.promise = promise;
        task.handler = handler;
        this.schedule(priority, task);
    }

    // cancel all outstanding work, leaving current work running
    cancel() {
        for (let queue of this.levels) {
            queue.waiting = [];
        }
    }

    schedule(priority, task) {
        let queue = this.levels[priority];
        if (this.mayRun(priority)) {
            debug.log(`scheduler debug: immediately executing task of level '${queue.name}'`);
            this.run(queue, task);
        } else {
            if (queue.waiting.length >= queue.limit) {
                throw new Error(`queue limit of ${queue.limit} for queue ${queue.name} was exceeded; system may be deadlocked`);
            }
            debug.log(`scheduler debug: scheduling task on queue '${queue.name}'`);
            queue.waiting.push(task);
        }
    }

    run(queue, task) {
        queue.running++;
        if (task.promise === undefined) {
            if (task.work === undefined) {
                throw new Error('work function must be specified for task that does not already have a promise attached');
            }
            debug.log(`scheduler debug: executing work from queue '${queue.name}'`);
            try {
                task.promise = task.work();
            } catch (error) {
                console.log(`error caught from work function on queue '${queue.name}': ${error.message}`);
                task.promise = Promise.reject(error);
                // notify external error observer, if any
                if (this.errorHandler !== undefined) {
                    this.errorHandler(error);
                }
            }
        }
        task.promise
            .then(value => {
                if (task.handler !== undefined) {
                    debug.log(`scheduler debug: calling result handler for task from queue '${queue.name}'`);
                    task.handler(value);
                }
            })
            .catch(error => {
                console.log(`failed asynchronous work on queue '${queue.name}': ${error.message}`);
                // notify external error observer, if any
                if (this.errorHandler !== undefined) {
                    this.errorHandler(error);
                }
            })
            .then(() => {
                queue.running--;
                this.update();
            });
    }

    mayRun(priority) {
        let queue = this.levels[priority];
        if (queue.running >= queue.concurrency) {
            // this queue is executing at limit
            return false;
        }
        for (let scan = priority - 1; scan >= 0; scan--) {
            if (this.levels[scan].running > 0) {
                // can't execute ahead of more urgent work
                return false;
            }
            if (this.levels[scan].waiting.length > 0) {
                // there must be even more urgent work keeping these waiting
                return false;
            }
        }
        return true;
    }

    // find one item to run, if there is one
    update() {
        for (let queue of this.levels) {
            if (queue.waiting.length + queue.running === 0) {
                // nothing at this level, allowed to check next level
                continue;
            }
            if (queue.running >= queue.concurrency) {
                // this level is executing at limit, so nothing may execute
                return;
            }
            if (queue.waiting.length === 0) {
                // nothing new to run, but we have tasks running at this level, so nothing may execute
                return;
            }
            this.run(queue, queue.waiting.shift());
            return;
        }
    }
}
```
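To make the calling convention concrete, here is a sketch of how the methods above are meant to be called. The `PromiseQueue` in this snippet is a trivial stand-in with the same method names (it ignores priority and just runs everything immediately), only so the snippet stands alone; the real implementation is the class in the post.

```javascript
// Stand-in with the same interface as the real PromiseQueue above,
// just to demonstrate the calling convention (no actual prioritization).
class PromiseQueue {
    constructor(name) { this.levels = []; }
    createPriorityLevel(options) { this.levels.push(options); return this.levels.length - 1; }
    scheduleWork(priority, work, handler) { Promise.resolve().then(work).then(handler); }
    trackPromise(priority, promise, handler) { promise.then(handler); }
}

const queue = new PromiseQueue('example');

// one level per urgency class; lower numbers are more urgent
const urgent = queue.createPriorityLevel({ name: 'urgent' });
const background = queue.createPriorityLevel({ name: 'background' });

// schedule a work function that returns a Promise; the handler gets its result
queue.scheduleWork(background, () => Promise.resolve('loaded'), result => console.log(result));

// an API already handed us a Promise: track it so lower-priority work waits on it
queue.trackPromise(urgent, Promise.resolve('ready'), value => console.log(value));
```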
edit: had to edit out a dependency, should be fine now. Anyway, the point is that if you look at the execution sequence in run(...), you will see that every async item executes, handles exceptions, and then updates the queue(s). So if new work has been queued since you started, it will be found at that point. If you ever run out of work, the queue just sits there dormant. That's why schedule(...) has to check whether the queues are empty and run the item immediately if so, to kick the chain back off. This is a very classic pattern for making a queue of async items. This is about the most complicated and complete implementation of such a thing (I think), so you could go way simpler if you don't need priority levels.
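That simpler version, for anyone who doesn't need priority levels, might look like this: a single FIFO queue running one task at a time, with the same "finish, then check for newly queued work" chaining. The names here (`SimpleQueue`, `schedule`) are illustrative, not from any library:

```javascript
// Minimal single-level version of the classic pattern: one async task
// runs at a time; when it finishes, the queue checks whether new work
// arrived in the meantime and continues, otherwise it goes dormant.
class SimpleQueue {
    constructor() {
        this.waiting = [];
        this.running = false;
    }

    schedule(work) {
        this.waiting.push(work);
        if (!this.running) {
            this.update(); // queue was dormant: kick the chain back off
        }
    }

    update() {
        const work = this.waiting.shift();
        if (work === undefined) {
            this.running = false; // dormant until schedule() is called again
            return;
        }
        this.running = true;
        Promise.resolve()
            .then(work) // also catches synchronous throws from work()
            .catch(error => console.log(`failed asynchronous work: ${error.message}`))
            .then(() => this.update()); // runs whether work succeeded or failed
    }
}

// tasks queued while earlier ones run are picked up when the queue
// reaches what used to be its end
const q = new SimpleQueue();
q.schedule(() => new Promise(res => setTimeout(res, 10)).then(() => console.log('first')));
q.schedule(() => console.log('second'));
```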