/*****************************************************************************/
/*
                                 Throttle.c

Request "throttling" is a term adopted to describe controlling the number
of concurrent requests that can be processing against any specified path
at any one time.  Requests in excess of this value are FIFO queued, up to
an optional limit, waiting for a being-processed request to conclude,
allowing the next queued request to continue processing.  This is
primarily intended to limit concurrent resource-intensive script execution
but could be applied to *any* resource path.

Here's one dictionary description.

  throttle
    n 1: a valve that regulates the supply of fuel to the engine
         [syn: accelerator, throttle valve]
      2: a pedal that controls the throttle valve; "he stepped on the gas"
         [syn: accelerator, accelerator pedal, gas pedal, gas, gun]
    v 1: place limits on; "restrict the use of this parking lot"
         [syn: restrict, restrain, trammel, limit, bound, confine]
      2: squeeze the throat of; "he tried to strangle his opponent"
         [syn: strangle, strangulate]
      3: reduce the air supply; of carburetors [syn: choke]

This is applied to a path (or paths) using the mapping SET THROTTLE= rule.
This rule allows a maximum number of concurrent requests to be specified,
and optionally a maximum number of queued requests.  When the maximum
number of queued requests is exceeded the client receives a 503 (server
too busy) status.  Empty or zero parameters may be included between the
commas.
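The process-queue-busy decision just described can be illustrated with a
minimal, self-contained sketch (the struct and names below are invented for
the example; the real module uses THROTTLE_STRUCT and per-path rule values):

```c
#include <assert.h>

// Hypothetical per-path counters and limits; the real THROTTLE_STRUCT differs.
typedef struct {
   int processing;   // requests currently being processed
   int queued;       // requests currently FIFO queued
   int max_process;  // 'from': concurrent requests before queuing begins
   int max_queue;    // queued requests before 503 "busy" (0 = unlimited)
} path_throttle;

// Returns 'P' to process immediately, 'Q' to queue, 'B' for 503 "busy".
static char throttle_decide (path_throttle *tp)
{
   if (tp->processing < tp->max_process) { tp->processing++; return 'P'; }
   if (!tp->max_queue || tp->queued < tp->max_queue) { tp->queued++; return 'Q'; }
   return 'B';
}
```

With max_process=2 and max_queue=1, the first two requests process, the
third queues, and the fourth is immediately "busy".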
  throttle=from[/per-user][,to,resume,busy,timeout-queue,timeout-busy]
  throttle=n1[/u1][,n2,n3,n4,to1,to2]

  o  from (n1)        concurrent requests before queuing begins
  o  per-user (u1)    concurrent requests per (authenticated) user
  o  to (n2)          queuing continues up to this value, when the queue FIFOs
  o  resume (n3)      FIFO continues to this value, where queuing begins again
  o  busy (n4)        absolute maximum concurrent requests before 503 "busy"
  o  t/o-queue (to1)  period before a queued request is processed
  o  t/o-busy (to2)   period before a queued request is 503 "busy"ed

When a throttle rule is loaded it is checked for "sensible" values.
Basically this means that each successive value must be larger than its
predecessor.


DESCRIPTION
-----------

o  If 'from' (n1) is exceeded then begin to queue.  The number of actively
   processing requests does not increase, but queue length does.

o  If 'per-user' (u1) is non-zero it regulates the number of concurrently
   processing requests for any one authenticated user.  Even though the
   'from' value may allow processing, if 'per-user' would be exceeded the
   request is queued.

o  If 'to' (n2) is specified and exceeded then begin to FIFO requests from
   the queue into processing.  Queue length does not increase (new requests
   are being put onto the queue while previous ones are taken off the other
   end), but the number processing now begins to increase.

o  If 'resume' (n3) is specified it acts as an absolute control on all
   request PROCESSING associated with the path.  After this value the
   number of requests actively being processed does not increase but queue
   length again does.

o  If 'busy' (n4) is exceeded ALWAYS immediately generate a 503 "busy".

o  If 'timeout-queue' (to1) is specified, queued requests exceeding the
   period are FIFOed from the queue into processing unless the 'resume'
   (n3) limit would be exceeded.  If it would be exceeded they remain in
   the queue (potentially indefinitely, or until they FIFO off the queue,
   or until 'timeout-busy' (to2) occurs).

o  If 'timeout-busy' (to2) is specified, queued requests exceeding the
   period are immediately terminated with a 503 "busy" status.  A
   'timeout-busy' only begins after the expiry of any 'timeout-queue'.


PER-USER THROTTLING
-------------------

If the concurrent processing value ('from') has a second, slash-delimited
integer, this serves to limit the number of authenticated user-associated
requests that can be concurrently processing.

  throttle=n1/u1[,n2,n3,n4,to1,to2]

When a request is available for processing the associated remote user name
is checked for activity against the queue.  The 'u1' (user throttle) value
is a limit on that user name's concurrent processing.  If it would be
exceeded the request is queued until the number of that user's requests
processing drops below the 'u1' value.  All other values in the throttle
rule are applied as for non-per-user throttling.

NOTE: the user name used for comparison purposes is the authenticated
remote user (same as the CGI variable value REMOTE_USER).  This can be for
any realm.  Of course the same string can be used to represent different
users within different authentication realms, so care should be exercised
that per-user throttling does not span realms, otherwise unexpected (and
incorrect) throttling may occur for distinct users.

If an unauthenticated request is matched against the throttle rule (i.e.
there is no authorization rule matching the request path) the client has a
500 (server error) response returned.  Obviously per-user throttling must
have a remote user name to throttle against, and this is a configuration
issue.


EXAMPLES
--------

1) throttle=10

Requests up to 10 are concurrently processed.  When 10 is reached further
requests are queued to server capacity.

2) throttle=10,20

Concurrent requests to 10 are processed immediately.  From 11 to 20
requests are queued.
After 20, all requests are queued but each also results in a request
FIFOing off the queue to be processed (queue length is static, the number
being processed increases to server capacity).

3) throttle=15,30,40

Concurrent requests up to 15 are immediately processed.  Requests 16
through 30 are queued, while requests 31 to 40 result in the new requests
being queued and waiting requests being FIFOed into processing.
Concurrent requests from 41 onwards are again queued, in this scenario to
server capacity.

4) throttle=10,20,30,40

Concurrent requests up to 10 are immediately processed.  Requests 11
through 20 will be queued.  Concurrent requests from 21 to 30 are queued
too, but at the same time waiting requests are FIFOed from the queue
(resulting in 10 (n1) + 10 (n3-n2) = 20 being processed).  From 31 onwards
requests are just queued.  Up to 40 concurrent requests may be against the
path before all new requests are immediately returned with a 503 "busy"
status.  With this scenario no more than 20 can be concurrently processed
with 20 concurrently queued.

5) throttle=10,,,30

Concurrent requests up to 10 are processed.  When 10 is reached requests
are queued up to request 30.  When request 31 arrives it is immediately
given a 503 "busy" status.

6) throttle=10,20,30,40,00:02:00

This is basically the same as scenario 4) but with a resume-on-timeout of
two minutes.  If there are currently 15 (or 22 or 28) requests (n1
exceeded, n3 still within limit) the queued requests will begin processing
on timeout.  Should there be 32 processing (n3 has reached its limit) the
request will continue to sit in the queue.  The timeout is not reset.

7) throttle=15,30,40,,,00:03:00

This is basically the same as scenario 3) but with a busy-on-timeout of
three minutes.  When the timeout expires the request is immediately
dequeued with a 503 "busy" status.

8) throttle=10/1

Concurrent requests up to 10 are processed.  The requests must be from
authenticated users.  Each authenticated user is allowed to execute at
most one concurrent request against this path.  When 10 is reached, or if
fewer than 10 users are currently executing requests, further requests are
queued to server capacity.

9) throttle=10/1,,,,,00:03:00

This is basically the same as scenario 8) but with a busy-on-timeout of
three minutes.  When the timeout expires any request still queued against
the user name is immediately dequeued with a 503 "busy" status.


IMPLEMENTATION
--------------

Request throttling is implemented by the MAPURL.C rule loading providing
the SET THROTTLE= rule parameters against the path, AS WELL AS counting
the number of such paths in the rules.  This number is stored along with
the maxima and is used as an index into, as well as to set up, a
dynamically allocated array of structures used to support the concurrent
usage tracking and queuing against that particular path.

Per-user throttling is accomplished by maintaining a list of per-user data
structures associated with each throttle rule, used to track the number of
requests currently processing and queued against each authenticated user
name attempting to access the path.  The list is searched each time the
fundamental throttle structure needs to decide on processing/queuing a
request.  This list expands dynamically on demand and is periodically
garbage-collected when the underlying throttle structure is quiescent.

When path mapping SETs a throttle maximum against the request's path the
associated index number is used to select the corresponding element of the
usage structure array.  Simple and reasonably efficient.

RULE RELOADING could present a problem with this schema ... but doesn't.
The array can grow in size to accommodate a new rule load with additional
throttles, but will never shrink (being based on an array index, not a
pointer, makes this possible).
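The grow-only, index-addressed array can be sketched as follows (a
simplified illustration using a generic counters struct and the standard
realloc(); the real module uses VmRealloc() and THROTTLE_STRUCT):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

// Hypothetical stand-in for THROTTLE_STRUCT's usage counters.
typedef struct { int processing, queued; } counters;

static counters *throttle_array = NULL;
static int throttle_total = 0;

// Grow (never shrink) the array on a rule (re)load so that integer
// indices buffered by in-flight requests remain valid afterwards.
static void throttle_array_init (int rule_total)
{
   if (rule_total <= throttle_total) return;  // never shrink
   throttle_array = realloc (throttle_array,
                             rule_total * sizeof(counters));
   assert (throttle_array != NULL);
   // zero only the newly added elements, preserving existing data
   memset (throttle_array + throttle_total, 0,
           (rule_total - throttle_total) * sizeof(counters));
   throttle_total = rule_total;
}
```

Because requests hold an integer index rather than a pointer, a reload
that grows (or even repurposes) array elements cannot leave a dangling
reference, only the transient "confusion" described here.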
This allows existing requests with buffered indices to (somewhat)
correctly access the array and be (somewhat) correctly processed.  If the
actual paths represented by array elements change there may be some
"confusion" in what the processing and queuing represent.  This could
possibly result in some resources temporarily being inappropriately
throttled, but this gradually filters out of the functionality as
associated requests conclude.  There are no fatal implications (that I can
see, anyway) in this scheme.


VERSION HISTORY
---------------
22-MAR-2014  MGD  add accounting throttle totals (supports WASDmon)
18-SEP-2006  MGD  bugfix; ThrottleReport() column alignment of 'busy'
                  and 'total' percentages in second row of per-path
                  statistics
04-JUL-2006  MGD  use PercentOf() for more accurate percentages
25-MAY-2005  MGD  ThrottleControl() provide TERMINATE/RELEASE selected
                  on username or script name
19-MAY-2005  MGD  per-user throttling
06-OCT-2004  MGD  reset rqPathSet.ThrottleSet appropriately
20-JUL-2003  MGD  revise reporting format
04-AUG-2001  MGD  support module WATCHing,
                  fix end throttle call to RequestExecutePostThrottle()
18-MAY-2001  MGD  bugfix; exceeding the point where we should start to
                  FIFO (jfp@altavista.com)
08-MAY-2001  MGD  modify throttle parameter meanings and functionality
08-APR-2001  MGD  add 'queue-length' to throttling
13-MAR-2001  MGD  initial development
*/
/*****************************************************************************/

#ifdef WASD_VMS_V7
#undef _VMS__V6__SOURCE
#define _VMS__V6__SOURCE
#undef __VMS_VER
#define __VMS_VER 70000000
#undef __CRTL_VER
#define __CRTL_VER 70000000
#endif

/* standard C header files */
#include
#include
#include
#include
#include

/* VMS related header files */
#include
#include
#include

/* application header files */
#include "wasd.h"

#define WASD_MODULE "THROTTLE"

/* reset the request 'path set' data */
#define REQUEST_RESET_THROTTLE(rqptr) \
   rqptr->rqPathSet.ThrottleSet = false; \
   rqptr->rqPathSet.ThrottleBusy = \
      rqptr->rqPathSet.ThrottleFrom = \
      rqptr->rqPathSet.ThrottleIndex = \
      rqptr->rqPathSet.ThrottlePerUser = \
      rqptr->rqPathSet.ThrottleResume = \
      rqptr->rqPathSet.ThrottleTo = \
      rqptr->rqPathSet.ThrottleTimeoutBusy = \
      rqptr->rqPathSet.ThrottleTimeoutQueue = 0;

/******************/
/* global storage */
/******************/

int  ThrottleBusyMetricTotal,
     ThrottleBusyMetricTotal503,
     ThrottleTotal;

THROTTLE_STRUCT  *ThrottleArray;

/********************/
/* external storage */
/********************/

#ifdef DBUG
extern BOOL  Debug;
#else
#define Debug 0
#endif

extern int  InstanceNumber;
extern char  ErrorSanityCheck[], ServerHostPort[], SoftwareID[];
extern ACCOUNTING_STRUCT  *AccountingPtr;
extern CONFIG_STRUCT  Config;
extern MAPPING_META  *MappingMetaPtr;
extern MSG_STRUCT  Msgs;
extern WATCH_STRUCT  Watch;

/*****************************************************************************/
/*
(Re)initialize the global throttle structure array.  This will not upset
any per-user structure lists pointed at because it's an effective
realloc() preserving the data already present.
*/

ThrottleInit ()
{
   int  idx;
   THROTTLE_STRUCT  *tsptr;

   /*********/
   /* begin */
   /*********/

   if (WATCH_MODULE(WATCH_MOD_THROTTLE))
      WatchThis (WATCHALL, WATCH_MOD_THROTTLE,
                 "ThrottleInit() !UL !UL",
                 ThrottleTotal, MappingMetaPtr->ThrottleTotal);

   if (ThrottleTotal < MappingMetaPtr->ThrottleTotal)
      ThrottleTotal = MappingMetaPtr->ThrottleTotal;
   if (!ThrottleTotal) return (SS$_NORMAL);

   ThrottleArray = (THROTTLE_STRUCT*)
      VmRealloc (ThrottleArray, ThrottleTotal*sizeof(THROTTLE_STRUCT), FI_LI);

   /* in case this is a mapping rule reload reset all the counters */
   ThrottleZero ();
}

/*****************************************************************************/
/*
Zero the accumulators associated with the throttle structure array.
*/

ThrottleZero ()
{
   int  idx;
   THROTTLE_STRUCT  *tsptr;

   /*********/
   /* begin */
   /*********/

   if (WATCH_MODULE(WATCH_MOD_THROTTLE))
      WatchThis (WATCHALL, WATCH_MOD_THROTTLE, "ThrottleZero()");

   for (idx = 0; idx < ThrottleTotal; idx++)
   {
      tsptr = &ThrottleArray[idx];
      tsptr->MaxProcessingCount = tsptr->MaxQueuedCount =
         tsptr->Total503Count = tsptr->TotalCount =
         tsptr->TotalFiFoCount = tsptr->TotalQueuedCount =
         tsptr->TotalTimeoutBusyCount = tsptr->TotalTimeoutQueueCount = 0;
   }

   ThrottleMonitorReset ();
}

/*****************************************************************************/
/*
Called by HttpdTick() each minute or when there is no more server activity.
*/

ThrottleMonitorReset ()
{
   int  idx;
   THROTTLE_STRUCT  *tsptr;
   THROTTLE_PER_USER_STRUCT  *tpuptr, *nextuser;

   /*********/
   /* begin */
   /*********/

   if (WATCH_MODULE(WATCH_MOD_THROTTLE))
      WatchThis (WATCHALL, WATCH_MOD_THROTTLE, "ThrottleMonitorReset()");

   InstanceMutexLock (INSTANCE_MUTEX_HTTPD);
   AccountingPtr->ThrottleBusyMetric =
      ThrottleBusyMetricTotal = ThrottleBusyMetricTotal503 = 0;
   AccountingPtr->CurrentThrottleProcessing[InstanceNumber] =
      AccountingPtr->CurrentThrottleQueued[InstanceNumber] = 0;

   /* zeroing upsets the 'currently' data, recalculate just in case */
   for (idx = 0; idx < ThrottleTotal; idx++)
   {
      tsptr = &ThrottleArray[idx];
      AccountingPtr->CurrentThrottleProcessing[InstanceNumber] +=
         tsptr->CurrentProcessingCount;
      AccountingPtr->CurrentThrottleQueued[InstanceNumber] +=
         tsptr->CurrentQueuedCount;

      /* garbage-collect per-user structures on a quiescent throttle path */
      if (!tsptr->CurrentProcessingCount && !tsptr->CurrentQueuedCount)
      {
         tpuptr = tsptr->FirstUserPtr;
         while (tpuptr)
         {
            if (tpuptr->CurrentQueuedCount || tpuptr->CurrentProcessingCount)
               ErrorNoticed (NULL, SS$_BUGCHECK, ErrorSanityCheck, FI_LI);
            nextuser = tpuptr->NextUserPtr;
            VmFree (tpuptr, FI_LI);
            tpuptr = nextuser;
         }
         tsptr->FirstUserPtr = NULL;
         tsptr->CurrentPerUserCount = 0;
      }
   }
   InstanceMutexUnLock (INSTANCE_MUTEX_HTTPD);
}
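The per-user garbage collection above walks and frees a singly linked list
only once the path is quiescent.  A self-contained sketch of that
walk-and-free pattern (generic names and the standard allocator, not
WASD's VmFree() or THROTTLE_PER_USER_STRUCT):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for THROTTLE_PER_USER_STRUCT */
typedef struct user_node {
   int queued, processing;
   struct user_node *next;
} user_node;

/* Free the whole per-user list, but only when no entry has activity;
   returns the number of nodes released (0 if the list was left alone). */
static int collect_if_quiescent (user_node **head)
{
   int freed = 0;
   user_node *node, *next;
   for (node = *head; node; node = node->next)
      if (node->queued || node->processing) return (0);  /* still active */
   for (node = *head; node; node = next)
   {
      next = node->next;   /* save the link before freeing the node */
      free (node);
      freed++;
   }
   *head = NULL;
   return (freed);
}
```

Saving the next pointer before freeing each node is the essential detail;
reading it after the free would be a use-after-free.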
/*****************************************************************************/
/*
The request path having a maximum concurrent request limit set has
resulted in a call to this function before actually processing the
request.  If the request is stalled by queuing then return SS$_RETRY; if
it can continue immediately then return SS$_CONTINUE.

See the description at the beginning of this module for an explanation of
the algorithm used in this function.
*/

int ThrottleBegin (REQUEST_STRUCT *rqptr)
{
   BOOL  ProcessRequest, QueueRequest;
   char  *cptr;
   double  fScratch;
   THROTTLE_STRUCT  *tsptr, *root_tsptr;
   THROTTLE_PER_USER_STRUCT  *tpuptr;

   /*********/
   /* begin */
   /*********/

   if (WATCHMOD (rqptr, WATCH_MOD_THROTTLE))
      WatchThis (WATCHITM(rqptr), WATCH_MOD_THROTTLE, "ThrottleBegin()");

   ThrottleBusyMetricTotal++;
   if (ThrottleBusyMetricTotal503)
   {
      /* must update the metric here to potentially reduce the metric */
      fScratch = (double)ThrottleBusyMetricTotal503 * 100.0 /
                 (double)ThrottleBusyMetricTotal;
      InstanceMutexLock (INSTANCE_MUTEX_HTTPD);
      AccountingPtr->ThrottleBusyMetric = (int)fScratch;
      if (modf (fScratch, &fScratch) >= 0.5)
         AccountingPtr->ThrottleBusyMetric++;
      InstanceMutexUnLock (INSTANCE_MUTEX_HTTPD);
   }

   tsptr = &ThrottleArray[rqptr->rqPathSet.ThrottleIndex];

   if (rqptr->rqPathSet.ThrottlePerUser)
   {
      /************/
      /* per-user */
      /************/

      if (WATCHMOD (rqptr, WATCH_MOD_THROTTLE))
         WatchThis (WATCHITM(rqptr), WATCH_MOD_THROTTLE,
                    "per-user:!AZ", rqptr->RemoteUser);

      if (!rqptr->RemoteUser[0])
      {
         if (WATCHING (rqptr, WATCH_REQUEST))
            WatchThis (WATCHITM(rqptr), WATCH_REQUEST,
                       "THROTTLE per-user NO USER (authorization)");
         REQUEST_RESET_THROTTLE(rqptr)
         rqptr->rqResponse.HttpStatus = 500;
         ErrorGeneral (rqptr, MsgFor(rqptr,MSG_HTTP_500), FI_LI);
         return (SS$_ABORT);
      }

      /* note that this is being throttled on a per-user basis */
      rqptr->ThrottlePerUser = true;

      /* look for a per-user entry */
      tpuptr = tsptr->FirstUserPtr;
      while (tpuptr)
      {
         if (!strcmp (rqptr->RemoteUser, tpuptr->RemoteUser)) break;
         tpuptr = tpuptr->NextUserPtr;
      }

      if (!tpuptr)
      {
         /* didn't find one, look for a currently unused entry */
         tpuptr = tsptr->FirstUserPtr;
         while (tpuptr)
         {
            if (!tpuptr->CurrentQueuedCount &&
                !tpuptr->CurrentProcessingCount) break;
            tpuptr = tpuptr->NextUserPtr;
         }
         if (tpuptr)
         {
            /* initialize the reused entry data */
            if (WATCHMOD (rqptr, WATCH_MOD_THROTTLE))
               WatchThis (WATCHITM(rqptr), WATCH_MOD_THROTTLE, "reused");
            strzcpy (tpuptr->RemoteUser, rqptr->RemoteUser,
                     sizeof(tpuptr->RemoteUser));
            tpuptr->CurrentQueuedCount = tpuptr->CurrentProcessingCount =
               tpuptr->TotalCount = 0;
         }
      }

      if (!tpuptr)
      {
         /* didn't find one, add a new entry to the list */
         if (WATCHMOD (rqptr, WATCH_MOD_THROTTLE))
            WatchThis (WATCHITM(rqptr), WATCH_MOD_THROTTLE, "new");
         tpuptr = (THROTTLE_PER_USER_STRUCT*)
                  VmGet (sizeof(THROTTLE_PER_USER_STRUCT));
         /* insert the new entry at the head of the list */
         tpuptr->NextUserPtr = tsptr->FirstUserPtr;
         tsptr->FirstUserPtr = tpuptr;
         /* initialize the new entry data */
         strzcpy (tpuptr->RemoteUser, rqptr->RemoteUser,
                  sizeof(tpuptr->RemoteUser));
         tsptr->CurrentPerUserCount++;
         if (tsptr->CurrentPerUserCount > tsptr->MaxPerUserCount)
            tsptr->MaxPerUserCount = tsptr->CurrentPerUserCount;
      }

      tpuptr->TotalCount++;
   }
   else
   {
      rqptr->ThrottlePerUser = false;
      tpuptr = NULL;
   }

   /******************/
   /* check throttle */
   /******************/

   tsptr->TotalCount++;

   /* throttle=from[/user],to,resume,busy,t/o-queue,t/o-busy */
   ProcessRequest = QueueRequest = false;

   /* if it can be processed immediately */
   if (tsptr->CurrentProcessingCount < rqptr->rqPathSet.ThrottleFrom)
      ProcessRequest = true;
   else
   /* if it can be queued */
   if (!rqptr->rqPathSet.ThrottleBusy ||
       tsptr->CurrentQueuedCount + tsptr->CurrentProcessingCount <
          rqptr->rqPathSet.ThrottleBusy)
   {
      /* queue it up, perhaps we'll also be FIFOing */
      QueueRequest = true;
      /* if exceeding the point where we should start to FIFO */
      if (rqptr->rqPathSet.ThrottleTo &&
          tsptr->CurrentQueuedCount >=
             rqptr->rqPathSet.ThrottleTo - rqptr->rqPathSet.ThrottleFrom)
      {
         /* if still under any limit imposed on FIFO processing */
         if (!rqptr->rqPathSet.ThrottleResume ||
             tsptr->CurrentProcessingCount <
                (rqptr->rqPathSet.ThrottleResume -
                 rqptr->rqPathSet.ThrottleTo) +
                rqptr->rqPathSet.ThrottleFrom)
            ProcessRequest = true;
      }
   }

   if (WATCHING (rqptr, WATCH_REQUEST))
   {
      /*********/
      /* watch */
      /*********/

      if (ProcessRequest && QueueRequest) cptr = "->QUEUE->";
      else if (ProcessRequest) cptr = "PROCESS";
      else if (QueueRequest) cptr = "->QUEUE";
      else cptr = "BUSY";
      WatchThis (WATCHITM(rqptr), WATCH_REQUEST,
                 "THROTTLE set:!UL,!UL,!UL,!UL current:!UL,!UL !AZ",
                 rqptr->rqPathSet.ThrottleFrom,
                 rqptr->rqPathSet.ThrottleTo,
                 rqptr->rqPathSet.ThrottleResume,
                 rqptr->rqPathSet.ThrottleBusy,
                 tsptr->CurrentProcessingCount,
                 tsptr->CurrentQueuedCount,
                 cptr);
   }

   if (ProcessRequest && tpuptr)
   {
      /******************/
      /* check per-user */
      /******************/

      if (tpuptr->CurrentProcessingCount >= rqptr->rqPathSet.ThrottlePerUser)
      {
         ProcessRequest = false;
         QueueRequest = true;
      }

      if (WATCHING (rqptr, WATCH_REQUEST))
      {
         if (ProcessRequest) cptr = "PROCESS"; else cptr = "->QUEUE";
         WatchThis (WATCHITM(rqptr), WATCH_REQUEST,
                    "THROTTLE user:!AZ per:!UL current:!UL,!UL !AZ",
                    tpuptr->RemoteUser,
                    rqptr->rqPathSet.ThrottlePerUser,
                    tpuptr->CurrentProcessingCount,
                    tpuptr->CurrentQueuedCount,
                    cptr);
      }
   }

   /* this queue code section must precede the process code section */

   if (QueueRequest)
   {
      /*********/
      /* queue */
      /*********/

      tsptr->TotalQueuedCount++;
      tsptr->CurrentQueuedCount++;
      if (tsptr->CurrentQueuedCount > tsptr->MaxQueuedCount)
         tsptr->MaxQueuedCount = tsptr->CurrentQueuedCount;
      if (tpuptr) tpuptr->CurrentQueuedCount++;

      InstanceMutexLock (INSTANCE_MUTEX_HTTPD);
      AccountingPtr->ThrottleTotalQueued++;
      AccountingPtr->CurrentThrottleQueued[InstanceNumber]++;
      InstanceMutexUnLock (INSTANCE_MUTEX_HTTPD);

      /* the list entry data becomes a pointer to the request structure */
      rqptr->ThrottleListEntry.DataPtr = (void*)rqptr;

      /* add entry to the tail of the waiting list (FIFO) */
      ListAddTail (&tsptr->QueuedList, &rqptr->ThrottleListEntry,
                   LIST_ENTRY_TYPE_THROTTLE);

      if (rqptr->rqPathSet.ThrottleTimeoutQueue ||
          rqptr->rqPathSet.ThrottleTimeoutBusy)
         HttpdTimerSet (rqptr, TIMER_THROTTLE, 0);

      /* if this was not a FIFO operation then return here */
      if (!ProcessRequest) return (SS$_RETRY);

      /****************/
      /* FIFO process */
      /****************/

      /* release the head of the queued requests (note reuse of 'rqptr') */
      rqptr = (REQUEST_STRUCT*)(tsptr->QueuedList.HeadPtr->DataPtr);
      ThrottleRelease (rqptr, NULL, true);

      return (SS$_RETRY);
   }

   /* this process code section must follow the queue code section */

   if (ProcessRequest)
   {
      /***********/
      /* process */
      /***********/

      tsptr->CurrentProcessingCount++;
      if (tsptr->CurrentProcessingCount > tsptr->MaxProcessingCount)
         tsptr->MaxProcessingCount = tsptr->CurrentProcessingCount;
      if (tpuptr) tpuptr->CurrentProcessingCount++;

      InstanceMutexLock (INSTANCE_MUTEX_HTTPD);
      AccountingPtr->ThrottleTotalProcessed++;
      AccountingPtr->CurrentThrottleProcessing[InstanceNumber]++;
      InstanceMutexUnLock (INSTANCE_MUTEX_HTTPD);

      return (SS$_CONTINUE);
   }

   /************/
   /* too busy */
   /************/

   REQUEST_RESET_THROTTLE(rqptr)

   tsptr->Total503Count++;
   rqptr->rqResponse.HttpStatus = 503;
   ErrorGeneral (rqptr, MsgFor(rqptr,MSG_GENERAL_TOO_BUSY), FI_LI);

   /* update to increase the metric */
   ThrottleBusyMetricTotal503++;
   fScratch = (double)ThrottleBusyMetricTotal503 * 100.0 /
              (double)ThrottleBusyMetricTotal;
   InstanceMutexLock (INSTANCE_MUTEX_HTTPD);
   AccountingPtr->ThrottleTotalBusy++;
   AccountingPtr->ThrottleBusyMetric = (int)fScratch;
   if (modf (fScratch, &fScratch) >= 0.5)
      AccountingPtr->ThrottleBusyMetric++;
   InstanceMutexUnLock (INSTANCE_MUTEX_HTTPD);

   return (SS$_ABORT);
}

/*****************************************************************************/
/*
The request path having a maximum concurrent request limit set has
resulted in a call to this function at the end of processing the
request.  Adjust the concurrent processing counter and check if there are
any requests queued waiting for processing.  If not just return.  If there
is/are then remove the front of the list (FIFO) and call the AST function
address stored when it was originally queued to continue processing.

See the description above for further detail on "throttling" request
processing.
*/

ThrottleEnd (REQUEST_STRUCT *rqptr)
{
   int  status;
   LIST_ENTRY  *leptr;
   THROTTLE_STRUCT  *tsptr;
   THROTTLE_PER_USER_STRUCT  *tpuptr;

   /*********/
   /* begin */
   /*********/

   if (WATCHMOD (rqptr, WATCH_MOD_THROTTLE))
      WatchThis (WATCHITM(rqptr), WATCH_MOD_THROTTLE,
                 "ThrottleEnd() !&F", &ThrottleEnd);

   tsptr = &ThrottleArray[rqptr->rqPathSet.ThrottleIndex];

   if (rqptr->ThrottlePerUser)
   {
      /* look for the per-user entry */
      tpuptr = tsptr->FirstUserPtr;
      while (tpuptr)
      {
         if (!strcmp (rqptr->RemoteUser, tpuptr->RemoteUser)) break;
         tpuptr = tpuptr->NextUserPtr;
      }
      if (!tpuptr)
         ErrorNoticed (rqptr, SS$_BUGCHECK, ErrorSanityCheck, FI_LI);
   }
   else
      tpuptr = NULL;

   /* careful, these counters can get scrambled during a mapping reload */
   if (tsptr->CurrentProcessingCount) tsptr->CurrentProcessingCount--;
   if (tpuptr && tpuptr->CurrentProcessingCount)
      tpuptr->CurrentProcessingCount--;

   InstanceGblSecDecrLong (&AccountingPtr->CurrentThrottleProcessing);

   if (WATCHING (rqptr, WATCH_REQUEST))
   {
      if (tpuptr)
         WatchThis (WATCHITM(rqptr), WATCH_REQUEST,
                    "THROTTLE user:!AZ per:!UL current:!UL,!UL END",
                    tpuptr->RemoteUser,
                    rqptr->rqPathSet.ThrottlePerUser,
                    tpuptr->CurrentProcessingCount,
                    tpuptr->CurrentQueuedCount);
      WatchThis (WATCHITM(rqptr), WATCH_REQUEST,
                 "THROTTLE set:!UL,!UL,!UL,!UL current:!UL,!UL END",
                 rqptr->rqPathSet.ThrottleFrom,
                 rqptr->rqPathSet.ThrottleTo,
                 rqptr->rqPathSet.ThrottleResume,
                 rqptr->rqPathSet.ThrottleBusy,
                 tsptr->CurrentProcessingCount,
                 tsptr->CurrentQueuedCount);
   }

   REQUEST_RESET_THROTTLE(rqptr)

   /* if there are no requests waiting to be made active */
   if (!tsptr->CurrentQueuedCount) return;

   /********************/
   /* find one to FIFO */
   /********************/

   if (rqptr->ThrottlePerUser)
   {
      /************/
      /* per-user */
      /************/

      if (WATCHMOD (rqptr, WATCH_MOD_THROTTLE))
         WatchThis (WATCHITM(rqptr), WATCH_MOD_THROTTLE, "per-user");

      /* scan the list of throttled requests */
      leptr = tsptr->QueuedList.HeadPtr;
      while (leptr)
      {
         /* note the continued REUSE of 'rqptr'! */
         rqptr = (REQUEST_STRUCT*)leptr->DataPtr;

         /* look for this per-user entry and check its processing count */
         tpuptr = tsptr->FirstUserPtr;
         while (tpuptr)
         {
            if (!strcmp (rqptr->RemoteUser, tpuptr->RemoteUser))
            {
               /* set to null if this user would exceed the processing limit */
               if (tpuptr->CurrentProcessingCount >=
                   rqptr->rqPathSet.ThrottlePerUser) tpuptr = NULL;
               /* have hit the user so drop this search */
               break;
            }
            /* go to the next per-user entry */
            tpuptr = tpuptr->NextUserPtr;
         }

         /* if found a user entry that could use some more processing */
         if (tpuptr) break;

         /* go to the next throttled request */
         leptr = leptr->NextPtr;
      }
      if (!leptr) tpuptr = NULL;

      if (WATCHMOD (rqptr, WATCH_MOD_THROTTLE))
         WatchThis (WATCHITM(rqptr), WATCH_MOD_THROTTLE,
                    "!&B!-!&? \r\r!AZ",
                    tpuptr, tpuptr ? rqptr->RemoteUser : "");

      /**************************/
      /* if none could be found */
      /**************************/

      if (!tpuptr) return;
   }
   else
   {
      /* just FIFO the head of the list, note the REUSE of 'rqptr'! */
      rqptr = (REQUEST_STRUCT*)(tsptr->QueuedList.HeadPtr->DataPtr);
      tpuptr = NULL;
   }

   /**************/
   /* release it */
   /**************/

   ListRemove (&tsptr->QueuedList, &rqptr->ThrottleListEntry);

   /* if no process elbow-room (should only be after sysadmin intervention) */
   if (rqptr->rqPathSet.ThrottleResume &&
       tsptr->CurrentProcessingCount >=
          (rqptr->rqPathSet.ThrottleResume - rqptr->rqPathSet.ThrottleTo) +
          rqptr->rqPathSet.ThrottleFrom) return;

   /* release the head of the queued requests (FIFO) */
   ThrottleRelease (rqptr, tpuptr, true);
}

/*****************************************************************************/
/*
The HTTPd supervisor has called this function on a throttle timeout.  If
it's a "process" timeout then check whether there is an absolute limit on
concurrently processed requests; if not, just send the request on its way
to the previously buffered next function, otherwise it's a 503 "busy".  If
it's a "busy" timeout, generate a 503 "busy" and then send it to request
run-down!
*/

ThrottleTimeout (REQUEST_STRUCT *rqptr)
{
   LIST_ENTRY  *eptr;
   THROTTLE_STRUCT  *tsptr;

   /*********/
   /* begin */
   /*********/

   if (WATCHMOD (rqptr, WATCH_MOD_THROTTLE))
      WatchThis (WATCHITM(rqptr), WATCH_MOD_THROTTLE, "ThrottleTimeout()");

   tsptr = &ThrottleArray[rqptr->rqPathSet.ThrottleIndex];

   if (WATCHING (rqptr, WATCH_REQUEST))
      WatchThis (WATCHITM(rqptr), WATCH_REQUEST,
                 "THROTTLE timeout !AZ",
                 rqptr->rqPathSet.ThrottleTimeoutQueue ? "QUEUE" : "BUSY");

   if (rqptr->rqPathSet.ThrottleTimeoutQueue)
   {
      /**********************/
      /* timeout to process */
      /**********************/

      tsptr->TotalTimeoutQueueCount++;

      /* if to be processed but limited by absolute maximum on processing */
      if (rqptr->rqPathSet.ThrottleResume &&
          tsptr->CurrentProcessingCount >=
             (rqptr->rqPathSet.ThrottleResume - rqptr->rqPathSet.ThrottleTo) +
             rqptr->rqPathSet.ThrottleFrom)
      {
         /* can't begin processing, just sit and wait on the queue */
         if (rqptr->rqPathSet.ThrottleTimeoutBusy)
         {
            /* this time do not use the process timeout */
            rqptr->rqPathSet.ThrottleTimeoutQueue = 0;
            HttpdTimerSet (rqptr, TIMER_THROTTLE, 0);
         }
         else
            HttpdTimerSet (rqptr, TIMER_OUTPUT, 0);
         return;
      }

      /* remove the entry from the queue to begin processing */
      ThrottleRelease (rqptr, NULL, true);
      return;
   }

   /*******************/
   /* timeout to busy */
   /*******************/

   tsptr->TotalTimeoutBusyCount++;

   /* remove the entry from the queue to be terminated with 503 "busy" */
   ThrottleRelease (rqptr, NULL, false);
}

/*****************************************************************************/
/*
Remove the request specified by 'rqptr' from the appropriate throttled
path's queue.  If 'ToProcess' declare an AST to recommence its processing.
If it is to be taken off the queue and discarded, generate a 503 "busy"
status and explicitly call RequestEnd() as an AST.  By the time this
function is called it has been decided whether to process or terminate the
particular request.
*/ ThrottleRelease ( REQUEST_STRUCT *rqptr, THROTTLE_PER_USER_STRUCT *tpuptr, BOOL ToProcess ) { THROTTLE_STRUCT *tsptr; /*********/ /* begin */ /*********/ if (WATCHMOD (rqptr, WATCH_MOD_THROTTLE)) WatchThis (WATCHITM(rqptr), WATCH_MOD_THROTTLE, "ThrottleRelease()"); tsptr = &ThrottleArray[rqptr->rqPathSet.ThrottleIndex]; /* ThrottleEnd() can supply the 'tpuptr' saving a search here */ if (!tpuptr && rqptr->ThrottlePerUser) { /* look for a per-user entry */ tpuptr = tsptr->FirstUserPtr; while (tpuptr) { if (!strcmp (rqptr->RemoteUser, tpuptr->RemoteUser)) break; tpuptr = tpuptr->NextUserPtr; } if (!tpuptr) ErrorNoticed (rqptr, SS$_BUGCHECK, ErrorSanityCheck, FI_LI); } if (WATCHING (rqptr, WATCH_REQUEST)) { if (tpuptr) WatchThis (WATCHITM(rqptr), WATCH_REQUEST, "THROTTLE user:!AZ per:!UL current:!UL,!UL !AZ", tpuptr->RemoteUser, rqptr->rqPathSet.ThrottlePerUser, tpuptr->CurrentProcessingCount, tpuptr->CurrentQueuedCount, ToProcess ? "PROCESS" : "BUSY"); WatchThis (WATCHITM(rqptr), WATCH_REQUEST, "THROTTLE set:!UL,!UL,!UL,!UL current:!UL,!UL !AZ", rqptr->rqPathSet.ThrottleFrom, rqptr->rqPathSet.ThrottleTo, rqptr->rqPathSet.ThrottleResume, rqptr->rqPathSet.ThrottleBusy, tsptr->CurrentProcessingCount, tsptr->CurrentQueuedCount, ToProcess ? 
"PROCESS" : "BUSY"); } ListRemove (&tsptr->QueuedList, &rqptr->ThrottleListEntry); /* careful, these counters can get scrambled during a mapping reload */ if (tsptr->CurrentQueuedCount) tsptr->CurrentQueuedCount--; if (tpuptr && tpuptr->CurrentQueuedCount) tpuptr->CurrentQueuedCount--; InstanceGblSecDecrLong (&AccountingPtr->CurrentThrottleQueued); if (ToProcess) { /***********/ /* process */ /***********/ /* declare an AST for the next function to be performed */ SysDclAst (&RequestExecutePostThrottle, rqptr); tsptr->TotalFiFoCount++; tsptr->CurrentProcessingCount++; if (tsptr->CurrentProcessingCount > tsptr->MaxProcessingCount) tsptr->MaxProcessingCount = tsptr->CurrentProcessingCount; if (tpuptr) tpuptr->CurrentProcessingCount++; InstanceGblSecIncrLong (&AccountingPtr->CurrentThrottleProcessing); } else { /********/ /* busy */ /********/ REQUEST_RESET_THROTTLE(rqptr) tsptr->Total503Count++; rqptr->rqResponse.HttpStatus = 503; ErrorGeneral (rqptr, MsgFor(rqptr,MSG_GENERAL_TOO_BUSY), FI_LI); /* declare an AST to run-down the request */ SysDclAst (&RequestEnd, rqptr); } /* reinitialize the timer for output */ HttpdTimerSet (rqptr, TIMER_OUTPUT, 0); /* indicate it's no longer queued */ rqptr->ThrottleListEntry.DataPtr = NULL; } /*****************************************************************************/ /* Scan through all throttle structures looking for those with queued requests. Either release all the queued requests for processing, or just release the one with the specified connect number. This "release" is completely unconditional. That is a non-extreme-prejudice release sets requests processing regardless of any processing limitations in the throttle rules!! Return the number of dequeued requests. 
*/

int ThrottleControl
(
BOOL WithExtremePrejudice,
int ConnectNumber,
char *RemoteUser,
char *ScriptName
)
{
   int  idx, status, DequeuedCount;
   LIST_ENTRY  *leptr;
   REQUEST_STRUCT  *rqptr;
   THROTTLE_STRUCT  *tsptr;

   /*********/
   /* begin */
   /*********/

   if (WATCH_MODULE(WATCH_MOD_THROTTLE))
      WatchThis (WATCHALL, WATCH_MOD_THROTTLE,
                 "ThrottleControl() !UL !&Z !&Z",
                 WithExtremePrejudice, RemoteUser, ScriptName);

   DequeuedCount = 0;
   for (idx = 0; idx < MappingMetaPtr->ThrottleTotal; idx++)
   {
      tsptr = &ThrottleArray[idx];
      if (!tsptr->CurrentQueuedCount) continue;

      /* scan through all entries on this list */
      leptr = tsptr->QueuedList.HeadPtr;
      while (leptr)
      {
         rqptr = (REQUEST_STRUCT*)leptr->DataPtr;
         /* IMMEDIATELY get a pointer to the next in the list */
         leptr = leptr->NextPtr;

         /* if we're looking for a particular request and this is not it */
         if (ConnectNumber && rqptr->ConnectNumber != ConnectNumber) continue;

         /* if only purging scripts running as a specific VMS user */
         if (RemoteUser && RemoteUser[0])
            if (!strsame (rqptr->RemoteUser, RemoteUser, -1)) continue;

         /* if only purging matching scripts */
         if (ScriptName && ScriptName[0])
            if (!StringMatch (NULL, rqptr->ScriptName, ScriptName)) continue;

         if (WithExtremePrejudice)
            ThrottleRelease (rqptr, NULL, false);
         else
            ThrottleRelease (rqptr, NULL, true);
         DequeuedCount++;

         /* if dequeuing just the one request then return now */
         if (ConnectNumber) return (DequeuedCount);
      }
   }

   if (WATCH_CAT && Watch.Category)
      WatchThis (WATCHALL, WATCH_REQUEST,
                 "THROTTLE control !AZed !UL",
                 WithExtremePrejudice ? "terminat" : "releas",
                 DequeuedCount);

   return (DequeuedCount);
}

/*****************************************************************************/
/*
Provide a report on the current state and history of any throttled paths.
*/

ThrottleReport (REQUEST_STRUCT *rqptr)

{
   static char BeginPage [] =
"Queued  Processing  Per-User\n\
Path / User  Total  Busy  Total Cur Max FIFO T/Oq T/Ob  Cur Max  Cur Max\n";

   static char ThrottlePathFao [] =
"!3ZL !AZ throttle=!UL!&@,!UL,!UL,!UL,!AZ,!AZ  !&L !&L  \
!&L !&L !&L !&L !&L !&L  !&L !&L  !&L !&L\n\
!UL% !UL% !UL% !UL% !UL%\n";

   static char ThrottleBeginPerUserFao [] = "\n";

   static char ThrottlePerUserFao [] =
"!AZ !UL,!UL,!UL\n";

   static char ThrottleEndPerUserFao [] = "\n";

   static char ButtonsFao [] =
"*throttle=n1[/u1],n2,n3,n4,to1,to2\n\
n1, concurrent requests before queuing\n\
u1, per-user concurrent requests before queuing\n\
n2, concurrent requests before FIFO processing\n\
n3, concurrent requests before FIFO processing ceases again\n\
n4, concurrent requests before immediate \"busy\"\n\
to1, maximum period queued before processing (if not limited by n3)\n\
to2, maximum period queued before \"busy\" (from expiry of any to1)\n\
**all percentages are of path total\n\
***user data is queued, processing, total\n\
Requests\n\
!AZ\n";

   int  idx, status;
   unsigned long  *vecptr;
   unsigned long  FaoVector [32];
   MAP_RULE_META  EmptyRuleJustInCase;
   MAP_RULE_META  *mrptr;
   THROTTLE_STRUCT  *tsptr;
   THROTTLE_PER_USER_STRUCT  *tpuptr;

   /*********/
   /* begin */
   /*********/

   if (WATCHMOD (rqptr, WATCH_MOD_THROTTLE))
      WatchThis (WATCHITM(rqptr), WATCH_MOD_THROTTLE,
                 "ThrottleReport() !UL", MappingMetaPtr->ThrottleTotal);

   AdminPageTitle (rqptr, "Throttle Report", BeginPage);

   for (idx = 0; idx < MappingMetaPtr->ThrottleTotal; idx++)
   {
      tsptr = &ThrottleArray[idx];

      /* get details of the throttle rule using the index number */
      if (!(mrptr = MapUrl_ThrottleRule (idx)))
      {
         memset (mrptr = &EmptyRuleJustInCase, 0, sizeof(MAP_RULE_META));
         EmptyRuleJustInCase.TemplatePtr = "?";
      }

      vecptr = FaoVector;
      *vecptr++ = idx + 1;
      *vecptr++ = mrptr->TemplatePtr;
      *vecptr++ = mrptr->mpPathSet.ThrottleFrom;
      if (mrptr->mpPathSet.ThrottlePerUser)
      {
         *vecptr++ = "/!UL";
         *vecptr++ = mrptr->mpPathSet.ThrottlePerUser;
      }
      else
         *vecptr++ = "";
      *vecptr++ = mrptr->mpPathSet.ThrottleTo;
      *vecptr++ = mrptr->mpPathSet.ThrottleResume;
      *vecptr++ = mrptr->mpPathSet.ThrottleBusy;
      *vecptr++ = MetaConShowSeconds (rqptr, mrptr->mpPathSet.ThrottleTimeoutQueue);
      *vecptr++ = MetaConShowSeconds (rqptr, mrptr->mpPathSet.ThrottleTimeoutBusy);
      *vecptr++ = tsptr->TotalCount;
      *vecptr++ = tsptr->Total503Count;
      *vecptr++ = tsptr->TotalQueuedCount;
      *vecptr++ = tsptr->CurrentQueuedCount;
      *vecptr++ = tsptr->MaxQueuedCount;
      *vecptr++ = tsptr->TotalFiFoCount;
      *vecptr++ = tsptr->TotalTimeoutQueueCount;
      *vecptr++ = tsptr->TotalTimeoutBusyCount;
      *vecptr++ = tsptr->CurrentProcessingCount;
      *vecptr++ = tsptr->MaxProcessingCount;
      *vecptr++ = tsptr->CurrentPerUserCount;
      *vecptr++ = tsptr->MaxPerUserCount;
      *vecptr++ = PercentOf(tsptr->Total503Count,tsptr->TotalCount);
      *vecptr++ = PercentOf(tsptr->TotalQueuedCount,tsptr->TotalCount);
      *vecptr++ = PercentOf(tsptr->TotalFiFoCount,tsptr->TotalCount);
      *vecptr++ = PercentOf(tsptr->TotalTimeoutQueueCount,tsptr->TotalCount);
      *vecptr++ = PercentOf(tsptr->TotalTimeoutBusyCount,tsptr->TotalCount);

      FaoCheck (sizeof(FaoVector), &FaoVector, vecptr, FI_LI);

      status = FaolToNet (rqptr, ThrottlePathFao, &FaoVector);
      if (VMSnok (status)) ErrorNoticed (rqptr, status, NULL, FI_LI);

      /* next throttle entry if there is no per-user list associated */
      if (!tsptr->FirstUserPtr) continue;

      FaolToNet (rqptr, ThrottleBeginPerUserFao, NULL);

      tpuptr = tsptr->FirstUserPtr;
      while (tpuptr)
      {
         vecptr = FaoVector;
         *vecptr++ = tpuptr->RemoteUser;
         *vecptr++ = tpuptr->CurrentQueuedCount;
         *vecptr++ = tpuptr->CurrentProcessingCount;
         *vecptr++ = tpuptr->TotalCount;

         FaoCheck (sizeof(FaoVector), &FaoVector, vecptr, FI_LI);

         status = FaolToNet (rqptr, ThrottlePerUserFao, &FaoVector);
         if (VMSnok (status)) ErrorNoticed (rqptr, status, NULL, FI_LI);

         tpuptr = tpuptr->NextUserPtr;
      }

      FaolToNet (rqptr, ThrottleEndPerUserFao, NULL);
   }

   if (!idx) FaoToNet (rqptr, "(none)\n", NULL);

   vecptr = FaoVector;
   *vecptr++ = ADMIN_CONTROL_THROTTLE_ZERO;
   *vecptr++ = ADMIN_CONTROL_THROTTLE_RELEASE;
   *vecptr++ = ADMIN_CONTROL_THROTTLE_TERMINATE;
   *vecptr++ = ADMIN_REPORT_REQUEST_THROTTLE;
   *vecptr++ = AdminRefresh();

   FaoCheck (sizeof(FaoVector), &FaoVector, vecptr, FI_LI);

   status = FaolToNet (rqptr, ButtonsFao, &FaoVector);
   if (VMSnok (status)) ErrorNoticed (rqptr, status, NULL, FI_LI);

   rqptr->rqResponse.PreExpired = PRE_EXPIRE_ADMIN;
   ResponseHeader200 (rqptr, "text/html", &rqptr->NetWriteBufferDsc);

   AdminEnd (rqptr);
}

/*****************************************************************************/