/*****************************************************************************/
/*
Instance.c

The INSTANCE module contains functions used to set up, maintain and coordinate
action between multiple servers running on a single system, alone or in
combination with multiple servers running across a cluster.  An "instance" in
this context refers to the (almost) completely autonomous server process.  A
large portion of the required functionality concerns itself with
synchronization and communication between these instances (servers) using the
Distributed Lock Manager (DLM) and mutexes.

Multiple processes (instances) can share incoming requests by each assigning a
channel to the same BG: pseudo-device created with the appropriate listening
socket characteristics.  Each will receive a share of the incoming requests in
a round-robin distribution.  See NET.C for more information on this particular
aspect and its implementation.


VMS CLUSTERING COMPARISON
-------------------------

The approach WASD has used in providing multiple instance serving may be
compared to VMS clustering.  A cluster is often described as a loosely-coupled,
distributed operating environment where autonomous processors can join,
process and leave (even fail) independently, participating in a single
management domain and communicating with one another for the purposes of
resource sharing and high availability.

Similarly WASD instances run in autonomous, detached processes (across one or
more systems in a cluster) using a common configuration and management
interface, aware of the presence and activity of other instances (via the DLM
and shared memory), sharing processing load and providing rolling restart and
automatic failover as required.


LOAD SHARING
------------

On a multi-CPU system there are performance advantages to having processing
available for scheduling on each.  WASD employs AST (I/O) based processing and
was not originally designed to support VMS kernel threading.  Benchmarking has
shown this to be quite fast and efficient even when compared to a
kernel-threaded server (OSU) across 2 CPUs.  The advantage of multiple CPUs
for a single multi-threaded server also diminishes where a site frequently
activates scripts for processing.  These of course (potentially) require a CPU
each for processing.

Where a system has many CPUs (and to a lesser extent with only two and few
script activations) WASD's single-process, AST-driven design would scale more
poorly.  Running multiple WASD instances addresses this.  Of course load
sharing is not the only advantage to multiple instances ...


RESTART
-------

When multiple WASD instances are executing on a node and a restart is
directed, only one process shuts down at a time.  The rest remain available
for requests until the one restarting is fully ready to again process them
itself.


FAIL-THROUGH
------------

When multiple instances are executing on a node and one of these exits for
some reason (bugcheck, resource exhaustion, etc.) the other(s) will continue
to process requests.  Of course requests in-progress by the particular
instance at the time of instance failure are disconnected.  If the former
process has actually exited (in contrast to just the image) a new server
process will automatically be created after a few seconds.


ACTIVE/PASSIVE
--------------

Implemented in NetActive() and NetPassive(), and under the control of CLI and
Server Admin directives, instances can operate in either of two modes.
ACTIVE mode; classic/historical WASD instance processing, with all instances
sharing the request processing load.

PASSIVE mode; where only the supervisor instance is processing requests, other
instances are instantiated but quiescent.

One of the issues with multiple instances is use of the WATCH facility.  WATCH
necessarily can deal with only one instance at a time (tied as it is via a
network connection and the associated per-process socket).  It becomes a very
hit-and-miss activity to try and capture particular events on multi-instance
sites.  The only solution, without (before) passive instances, was to reduce
the site to a single instance (requires a restart) and WATCH only that.

Making instance processing passive is a (relatively) transparent action that
confines request processing to the one (supervisor) instance only.  This
allows WATCH to be used much more effectively.  When the activity is complete
just move the instances back to active mode.

Although described here in the INSTANCE.C module all the functionality is
implemented in the NET.C module.  To move into passive mode the mechanism is
simply to dequeue ($CANCEL) all the connection acceptance QIOs on all
instances but the supervisor.  Very simple: all the other instances no longer
respond to connection requests.  To move to active mode new accepts are
queued, restoring the instance(s) to processing.  Elegant and functional!
Instance failover is still maintained; a previously passive, non-supervisor
instance receiving the supervisor lock AST will check and enable active mode
on its sockets as required.


LOCK RESOURCE NAMES
-------------------

With IPv6 support in WASD v8.5 lock resource names needed to be changed from
the previously all-ASCII to a binary representation.  To continue using locks
to coordinate socket usage the previously hexadecimal representation for the
32 bit IPv4 address and 16 bit port number needed to be expanded to
accommodate the 128 bit IPv6 address and 16 bit port.  For this to fit into a
31 character resource name the address/port data needed to be represented in
binary and other information (e.g. naming version) needed to be compressed
(also into a binary representation).

   per-cluster specific   WASD|v|g|f                 WASD..
   per-node specific      WASD|v|g|node|f            WASD.KLAATU.
   per-node socket        WASD|v|g|node|ap           WASD.KLAATU.................
   admin socket           WASD|v|g|node::WASD:port   WASD.KLAATU::WASD:80

where

   v  is the 4 bit WASD instance lock version
   g  is the 4 bit "environment" number (0..15)
   f  is the 8 bit lock function (0..31)
   ap is the 32 bit or 128 bit address plus the 16 bit port

These locks can be located in the System Dump Analyzer using

   SDA> SHOW LOCK /name=WASD

The "per-cluster specific" are used to synchronize and communicate through
locking across all nodes in a cluster.

The "per-node specific" are used to synchronize and communicate through
locking access to various resources shared amongst servers executing on the
one node.

The "per-node socket" is used to distribute information about the BG: device
names of sockets already created by instances on the node.  The device names
stored in the lock value blocks then allow subsequent instances to just assign
channels and share in listening for requests.

The "admin socket" distributes the per-instance (process) administration
socket port across the node/cluster.
This administration socket is required to allow a site administrator
consistent access to a single instance (normally of course the socket sharing
between instances means that requests are distributed between processes in a
round-robin fashion).  As this contains only the port number (in decimal) it
assumes that there is a host name entry for each of the instance node names.


MUTEX USAGE
-----------

Use of the DLM for short-duration locking of shared memory access is probably
an overly-expensive approach.  So for these activities a mutex in shared
memory is used.  Multiple such mutexes are supported to provide maximum
granularity when distributed across various activities.  See
InstanceMutexLock() for further detail.


INSTANCE STATUS
---------------

This facility distributes basic instance status data to all instances on the
node and/or cluster.  The data comprises:

  o  instance name, e.g. "KLAATU::WASD:443"
  o  instance WASD version, e.g. "11.2.0"
  o  number of requests processed during the previous minute
  o  number of requests processed during the previous hour
  o  number of times the instance has started up
  o  date/time the instance last started
  o  date/time the instance last exited
  o  the VMS status at the last exit
  o  date/time the instance status was updated

The status data for the maximum instances per cluster (INSTANCE_MAX of 8 by a
maximum of 8 nodes, or 64 instances) is maintained in a table in the global
section accounting data.  For a single node (cf. clustered node) the data (for
a single or multiple instances) is updated by the individual instance from
which that data is generated.  On a single, non-clustered node the DLM is not
used for data distribution.

For clustered nodes the data is distributed to other nodes using the 64 byte
lock value block, once per minute, from the copy maintained in the node's
global section accounting data.  Data incoming via the DLM is ignored unless
from a different node name.  Only the supervisor on each node is required to
listen for and maintain incoming data.  When an instance becomes the node
supervisor it begins listening to the INSTANCE_CLUSTER_STATUS lock.

These statuses are then provided directly from the table to command-line and
in-browser reports.  The instances are listed in the order in which they were
introduced to the pool of per-node and/or per-cluster instances.

Obviously these reports are intended for larger WASD installations, primarily
those operating across multiple nodes in a cluster.  With the data being
stored in common, any of the other nodes can provide a per-cluster history
even if one or more nodes become completely non-operational.


VERSION HISTORY
---------------

28-APR-2018  MGD  refactor Admin..() AST delivery
                  InstanceSupervisor() simplify ticket key refresh
12-JAN-2018  MGD  bugfix; longstanding InstanceSocketForAdmin() sys$deq()
24-OCT-2017  MGD  InstanceStatus..() see above
                  some supporting changes to locking
21-JUN-2017  MGD  InstanceUseConfig() ensure config file values used
09-MAY-2017  MGD  bugfix; SesolaSessionTicketNewKey() sigh some more :-[
28-APR-2017  MGD  InstanceSessionTicketKey() rework multi-instance/cluster
                  (sigh!
                  yes again; the lack of a test cluster these days)
25-JUL-2016  MGD  InstanceSessionTicketKey() rework multi-instance rotate
09-JUL-2016  MGD  CLI /INSTANCE= now sets global section |InstanceMax|
                  to allow the created process to continue to exist and when
                  used needs to be reset with the likes of /INSTANCE=1
12-JUN-2016  MGD  InstanceSupervisor() refresh session ticket key every day
                  InstanceLockList() supervisor list process NL locks
                  allows node lists to be listed in supervisory order
10-MAY-2015  MGD  bugfix; move supervisor PID from InstanceNodeSupervisor()
                  to InstanceNodeSupervisorAst()
03-DEC-2014  MGD  InstanceSupervisor() global section ->InstanceNodeCurrent
13-JAN-2014  MGD  InstanceGblSecSetLong() add InstanceNumber for identifying
                  from process name
08-OCT-2009  MGD  if HttpdServerStartup delay additional instance startup
16-AUG-2009  MGD  bugfix; InstanceSupervisor() InstanceProcessName() prcnam
11-JUL-2009  MGD  InstanceSupervisor() and InstanceProcessName() move process
                  naming from "HTTPd:" to "WASD:" with backward-compatibility
                  via WASD_PROCESS_NAME
05-NOV-2006  MGD  it would appear that at least IA64 returns a lock value
                  block length of 64 regardless of whether it is empty or not!
15-JUL-2006  MGD  instance active and passive modes
                  InstanceNodeSupervisorAst() calls NetActive(TRUE)
                  refinements to controlled restart
04-JUL-2006  MGD  use PercentOf() for more accurate percentages
25-MAY-2005  MGD  allow for VMS V8.2 64 byte lksb$b_valblk
17-NOV-2004  MGD  InstanceLockReportData() rework blocked-by/blocking into
                  general indication of non-GR queue (underline)
10-APR-2004  MGD  significant modifications to support IPv6, lock names now
                  contain non-ASCII, binary components, remove never-used
                  InstanceLockReportCli() and /LOCK
27-JUL-2003  MGD  bugfix; use _BBCCI() to clear the mutex in InstanceExit()!!
19-JUN-2003  MGD  bugfix; use _BBCCI() to clear the mutex
31-MAR-2003  MGD  bugfix; add &puser= to lock 'show process' report
30-MAY-2002  MGD  restart when 'quiet'
20-MAY-2002  MGD  move more 'locking' functions over to using a 'mutex'
10-APR-2002  MGD  some refinement to single-instance locking
31-MAR-2002  MGD  use a more light-weight 'mutex' instead of DLM lock
                  around general global section access
28-DEC-2001  MGD  refine 'instance' creation/destruction
22-SEP-2001  MGD  initial
*/
/*****************************************************************************/

#ifdef WASD_VMS_V7
#undef _VMS__V6__SOURCE
#define _VMS__V6__SOURCE
#undef __VMS_VER
#define __VMS_VER 70000000
#undef __CRTL_VER
#define __CRTL_VER 70000000
#endif

/* standard C header files */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* VMS related header files */
#include <builtins.h>
#include <descrip.h>
#include <dvidef.h>
#include <jpidef.h>
#include <lckdef.h>
#include <libdtdef.h>
#include <lkidef.h>
#include <lnmdef.h>
#include <ssdef.h>
#include <starlet.h>
#include <stsdef.h>

/* application-related header files */
#include "wasd.h"

#define WASD_MODULE "INSTANCE"

/******************/
/* global storage */
/******************/

BOOL  InstanceNodeSupervisor,
      InstanceWasdName = true;

int  InstanceClusterCurrent,
     InstanceEnvNumber,
     InstanceLockNameMagic,
     InstanceNodeConfig,
     InstanceNodeCurrent,
     InstanceNodeJoiningCount,
     InstanceNumber = 0,  /* instances disabled */
     InstanceStatusRetry,
     InstanceStatusRequestCount,
     InstanceLockReportNameWidth,
     InstancePrevRequestCount,
     InstanceSocketCount,
     InstanceSupervisorPoll;

char  *InstanceGroupChars [] = { "","","2","3","4","5","6","7","8","9",
                                 "a","b","c","d","e","f" },
      *InstanceHttpChars [] = { "d","d","e","f","g","h","i","j","k" },
      *InstanceWasdChars [] = { "","1","2","3","4","5","6","7","8" };

#if sizeof(WasdChars)/sizeof(char*) < INSTANCE_MAX
#error "InstanceProcessName() WasdChars[] needs adjustment"
#endif

INSTANCE_LOCK  InstanceLockAdmin;
INSTANCE_LOCK  InstanceLockTable [INSTANCE_LOCK_COUNT+1];

INSTANCE_SOCKET_LOCK  InstanceSocketTable [INSTANCE_LOCK_SOCKET_MAX];

INSTANCE_STATUS  *InstanceStatusTablePtr;

BOOL  InstanceMutexHeld [INSTANCE_MUTEX_COUNT+1];

ulong  InstanceMutexCount [INSTANCE_MUTEX_COUNT+1],
       InstanceMutexWaitCount [INSTANCE_MUTEX_COUNT+1];

char  *InstanceMutexDescr [INSTANCE_MUTEX_COUNT+1] = INSTANCE_MUTEX_DESCR;

/********************/
/* external storage */
/********************/

extern BOOL  CliInstanceNoCrePrc,
             CliInstancePassive,
             ControlRestartQuiet,
             ControlRestartRequested,
             HttpdNetworkMode,
             HttpdServerStartup,
             HttpdTicking,
             ProtocolHttpsAvailable,
             ProtocolHttpsConfigured;

extern int  CliInstanceMax,
            EfnWait,
            EfnNoWait,
            ExitStatus,
            HttpdTickSecond,
            NetCurrentProcessing,
            RequestCount,
            ServerPort,
            SesolaTicketKeySuperDay;

extern int  ToLowerCase[],
            ToUpperCase[];

extern ulong  CrePrcMask[],
              HttpdTime64[],
              HttpdStartTime64[],
              SysLckMask[],
              WorldMask[];

extern ushort  HttpdNumTime[];

extern char  ErrorSanityCheck[],
             ErrorXvalNotValid[],
             HttpdVersion[];

extern uchar  SesolaTicketKey[];

extern ACCOUNTING_STRUCT  *AccountingPtr;
extern CONFIG_STRUCT  Config;
extern HTTPD_GBLSEC  *HttpdGblSecPtr;
extern HTTPD_PROCESS  HttpdProcess;
extern MSG_STRUCT  Msgs;
extern SYS_INFO  SysInfo;
extern WATCH_STRUCT  Watch;

/*****************************************************************************/
/*
Initialize the per-node and per-cluster lock resource names, queuing a NL lock
against each resource.  This NL lock will then be converted to other modes as
required.
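
For illustration (the values here are hypothetical), the single "magic" byte
carried in every resource name packs the lock version and environment number
as two 4 bit fields:

   InstanceLockNameMagic = ((HTTPD_LOCK_VERSION & 0xf) << 4) |
                           (InstanceEnvNumber & 0xf);
   // e.g. lock version 2 in environment 3 gives the byte 0x23

so a per-node resource for node KLAATU consists of the bytes "WASD", the magic
byte, "KLAATU" and the single-byte lock function code.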
*/ InstanceLockInit () { static int LockCode [] = { INSTANCE_LOCK_CODES }; int cnt, status, NameLength; char *cptr, *sptr, *zptr; INSTANCE_LOCK *ilptr; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceLockInit()"); if ((HTTPD_LOCK_VERSION & 0xf) > 15) ErrorExitVmsStatus (SS$_BUGCHECK, ErrorSanityCheck, FI_LI); if (sizeof(INSTANCE_STATUS) != 64) ErrorExitVmsStatus (SS$_BUGCHECK, ErrorSanityCheck, FI_LI); if (InstanceEnvNumber > INSTANCE_ENV_NUMBER_MAX) { FaoToStdout ("%HTTPD-E-INSTANCE, environment range 1 to !UL\n", DEMO_INSTANCE_GROUP_NUMBER); exit (SS$_BADPARAM); } /* a byte comprising two 4 bit fields, version and environment number */ InstanceLockNameMagic = ((HTTPD_LOCK_VERSION & 0xf) << 4) | (InstanceEnvNumber & 0xf); InstanceSupervisorPoll = INSTANCE_SUPERVISOR_POLL; sys$setprv (1, &SysLckMask, 0, 0); for (cnt = 1; cnt <= INSTANCE_LOCK_COUNT; cnt++) { ilptr = &InstanceLockTable[cnt]; /* build the (binary) resource name for each non-socket lock */ zptr = (sptr = ilptr->Name) + sizeof(ilptr->Name)-1; for (cptr = HTTPD_NAME; *cptr && sptr < zptr; *sptr++ = *cptr++); if (sptr < zptr) *sptr++ = (char)InstanceLockNameMagic; if (cnt > INSTANCE_CLUSTER_LOCK_COUNT) { cptr = SysInfo.NodeName; while (*cptr && sptr < zptr) *sptr++ = *cptr++; } if (sptr < zptr) *sptr++ = (char)LockCode[cnt]; *sptr = '\0'; /* not at all necessary */ NameLength = sptr - ilptr->Name; ilptr->NameLength = NameLength; ilptr->NameDsc.dsc$w_length = NameLength; ilptr->NameDsc.dsc$a_pointer = &ilptr->Name; if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchDataDump (ilptr->Name, ilptr->NameLength); /* this is the basic place-holding, resource instantiating lock */ status = sys$enqw (EfnWait, LCK$K_NLMODE, &ilptr->Lksb, LCK$M_EXPEDITE | LCK$M_SYSTEM, &ilptr->NameDsc, 0, 0, 0, 0, 0, 2, 0); if (VMSok (status)) status = ilptr->Lksb.lksb$w_status; if (VMSnok (status)) ErrorExitVmsStatus (status, "sys$enqw()", FI_LI); } sys$setprv (0, &SysLckMask, 0, 0); } /*****************************************************************************/ /* Queue a conversion to a blocking EX mode lock on the node supervisor resource. Whichever process holds this lock for the image lifetime and has the dubious honour of performing tasks related to the per-node instances (e.g. creating processes to provide the configured instances of the server). 
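
The conversion is queued with sys$enq() rather than sys$enqw() because the
grant may arrive arbitrarily later; the completion AST fires whenever this
process finally reaches the head of the conversion queue, typically when the
current supervisor exits.  A sketch of the essential pattern (the actual call
appears in the code below):

   status = sys$enq (EfnNoWait, LCK$K_EXMODE,
                     &InstanceLockTable[INSTANCE_NODE_SUPERVISOR].Lksb,
                     LCK$M_CONVERT | LCK$M_SYSTEM,
                     0, 0, &InstanceNodeSupervisorAst, 0, 0, 0, 2, 0);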
*/ InstanceServerInit () { int cnt, status; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceServerInit()"); sys$setprv (1, &SysLckMask, 0, 0); /* convert to a CR lock on the cluster membership resource */ InstanceLockTable[INSTANCE_CLUSTER].InUse = true; status = sys$enqw (EfnWait, LCK$K_CRMODE, &InstanceLockTable[INSTANCE_CLUSTER].Lksb, LCK$M_CONVERT | LCK$M_SYSTEM, 0, 0, 0, 0, 0, 0, 2, 0); if (VMSok (status)) status = InstanceLockTable[INSTANCE_CLUSTER].Lksb.lksb$w_status; if (VMSnok (status)) ErrorExitVmsStatus (status, "sys$enqw()", FI_LI); /* convert to a CR lock on the node membership resource */ InstanceLockTable[INSTANCE_NODE].InUse = true; status = sys$enqw (EfnWait, LCK$K_CRMODE, &InstanceLockTable[INSTANCE_NODE].Lksb, LCK$M_CONVERT | LCK$M_SYSTEM, 0, 0, 0, 0, 0, 0, 2, 0); if (VMSok (status)) status = InstanceLockTable[INSTANCE_NODE].Lksb.lksb$w_status; if (VMSnok (status)) ErrorExitVmsStatus (status, "sys$enqw()", FI_LI); /* be notified whenever a node-instance joins */ InstanceNotifySet (INSTANCE_NODE_JOINING, &InstanceNodeJoiningAst); /* notify others that we're joining */ status = InstanceNotifyWait (INSTANCE_NODE_JOINING, NULL, INSTANCE_JOINING_WAIT_SECS); if (VMSnok (status)) ErrorExitVmsStatus (status, "InstanceReady()", FI_LI); /* queue up for our turn to be the instance node supervisor */ InstanceNodeSupervisor = false; InstanceLockTable[INSTANCE_NODE_SUPERVISOR].InUse = true; /* note: this is NOT a sys$enqw(), it's asynchronous */ status = sys$enq (EfnNoWait, LCK$K_EXMODE, &InstanceLockTable[INSTANCE_NODE_SUPERVISOR].Lksb, LCK$M_CONVERT | LCK$M_SYSTEM, 0, 0, &InstanceNodeSupervisorAst, 0, 0, 0, 2, 0); if (VMSnok (status)) ErrorExitVmsStatus (status, "sys$enq()", FI_LI); sys$setprv (0, &SysLckMask, 0, 0); } /*****************************************************************************/ /* Check if there is already a configured single instance executing on this node. If this instance is configured to be a single instance then check how many other instances (potentially) are executing. If more than one exit with an error message - you can't have one instance wandering around thinking it's the only one. If this instance is configured to be one of multiple and there is already one instance thinking it's the only one around then exit for the complementary reason. 
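
The check relies on LCK$M_NOQUEUE semantics: the EX conversion is never
stalled behind an existing holder, it is either granted immediately or
sys$enqw() returns SS$_NOTQUEUED.  A sketch of the probe used below:

   status = sys$enqw (EfnWait, LCK$K_EXMODE,
                      &InstanceLockTable[INSTANCE_NODE_SINGLE].Lksb,
                      LCK$M_NOQUEUE | LCK$M_CONVERT | LCK$M_SYSTEM,
                      0, 0, 0, 0, 0, 0, 2, 0);

with SS$_NOTQUEUED treated as "another single-instance server is already
executing" and this process exiting.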
*/ InstanceSingleInit () { int status; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceSingleInit()"); sys$setprv (1, &SysLckMask, 0, 0); /* convert to an EX lock on the single instance resource */ InstanceLockTable[INSTANCE_NODE_SINGLE].InUse = true; status = sys$enqw (EfnWait, LCK$K_EXMODE, &InstanceLockTable[INSTANCE_NODE_SINGLE].Lksb, LCK$M_NOQUEUE | LCK$M_CONVERT | LCK$M_SYSTEM, 0, 0, 0, 0, 0, 0, 2, 0); if (status == SS$_NOTQUEUED) { FaoToStdout ( "%HTTPD-E-INSTANCE, single instance already executing - exiting\n"); /* cancel any startup messages provided for the monitor */ HttpdGblSecPtr->StatusMessage[0] = '\0'; if (HttpdProcess.Mode == JPI$K_INTERACTIVE) exit (SS$_ABORT | STS$M_INHIB_MSG); else { InstanceExit (); sys$delprc (0, 0); } } else if (VMSnok (status)) ErrorExitVmsStatus (status, "sys$enqw()", FI_LI); if (InstanceNodeConfig > 1) { /* multiple instances; successfully queued, convert back to NL mode */ status = sys$enqw (EfnWait, LCK$K_NLMODE, &InstanceLockTable[INSTANCE_NODE_SINGLE].Lksb, LCK$M_CONVERT | LCK$M_SYSTEM, 0, 0, 0, 0, 0, 0, 2, 0); if (VMSok (status)) status = InstanceLockTable[INSTANCE_NODE_SINGLE].Lksb.lksb$w_status; if (VMSnok (status)) ErrorExitVmsStatus (status, "sys$enqw()", FI_LI); InstanceLockTable[INSTANCE_NODE_SINGLE].InUse = false; } /* else single instance; else leave it at EX mode */ sys$setprv (0, &SysLckMask, 0, 0); } /*****************************************************************************/ /* Establish how many per-node instances are allowed on this system. Must be called after the server configuration is loaded. If the number of instances specified is negative this sets the number of instances to be the system CPU count minus that number. At least one instance will always be set. 
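
For example, on a four-CPU system (a purely illustrative calculation):

   /INSTANCE=CPU  ->  InstanceNodeConfig = 4   (one per CPU)
   /INSTANCE=2    ->  InstanceNodeConfig = 2
   /INSTANCE=-1   ->  InstanceNodeConfig = 3   (CPU count plus the negative value)

with the result always clamped to the range 1..INSTANCE_MAX.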
*/ InstanceFinalInit () { int status, NodeCount, StartupMax; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceFinalInit()"); if (CliInstanceMax < 0) StartupMax = SysInfo.AvailCpuCnt + CliInstanceMax; else if (CliInstanceMax > 0) StartupMax = CliInstanceMax; else if (CliInstanceMax == INSTANCE_PER_CPU) StartupMax = SysInfo.AvailCpuCnt; else StartupMax = 0; if (StartupMax) { /* for example: /INSTANCE=-99 */ if (StartupMax < 0) StartupMax = 0; InstanceMutexLock (INSTANCE_MUTEX_HTTPD); HttpdGblSecPtr->InstanceStartupMax = StartupMax; InstanceMutexUnLock (INSTANCE_MUTEX_HTTPD); } InstanceMutexLock (INSTANCE_MUTEX_HTTPD); StartupMax = HttpdGblSecPtr->InstanceStartupMax; InstanceMutexUnLock (INSTANCE_MUTEX_HTTPD); if (StartupMax == INSTANCE_PER_CPU) FaoToStdout ("%HTTPD-W-INSTANCE, explicitly set to CPU \ (not using configuration)\n"); else if (StartupMax) FaoToStdout ("%HTTPD-W-INSTANCE, explicitly set to !SL \ (not using configuration)\n", StartupMax); if (StartupMax == INSTANCE_PER_CPU) InstanceNodeConfig = SysInfo.AvailCpuCnt; else if (StartupMax < 0) InstanceNodeConfig = SysInfo.AvailCpuCnt + StartupMax; else if (StartupMax > 0) InstanceNodeConfig = StartupMax; else if (Config.cfServer.InstanceMax == INSTANCE_PER_CPU) InstanceNodeConfig = SysInfo.AvailCpuCnt; else if (Config.cfServer.InstanceMax < 0) InstanceNodeConfig = SysInfo.AvailCpuCnt + Config.cfServer.InstanceMax; else if (Config.cfServer.InstanceMax > 0) InstanceNodeConfig = Config.cfServer.InstanceMax; else InstanceNodeConfig = 1; /* minimum one, maximum eight (somewhat arbitrary but let's be sensible) */ if (InstanceNodeConfig < 1) InstanceNodeConfig = 1; else if (InstanceNodeConfig > INSTANCE_MAX) InstanceNodeConfig = INSTANCE_MAX; /* lets check that's it OK to go ahead with this configuration */ InstanceSingleInit (); NodeCount = InstanceLockList (INSTANCE_NODE, NULL, NULL); if (NodeCount > 1 && InstanceLockTable[INSTANCE_NODE_SINGLE].InUse) { FaoToStdout ( "%HTTPD-W-INSTANCE, multiple instances already executing - exiting\n"); /* cancel any startup messages provided for the monitor */ HttpdGblSecPtr->StatusMessage[0] = '\0'; InstanceExit (); sys$delprc (0, 0); } if (!CliInstanceNoCrePrc && NodeCount > InstanceNodeConfig) { FaoToStdout ("%HTTPD-W-INSTANCE, sufficient processes - exiting\n"); /* cancel any startup messages provided for the monitor */ HttpdGblSecPtr->StatusMessage[0] = '\0'; InstanceExit (); sys$delprc (0, 0); } if (NodeCount == 1) { /* first-in sets the pace */ InstanceMutexLock (INSTANCE_MUTEX_HTTPD); HttpdGblSecPtr->InstancePassive = CliInstancePassive || Config.cfServer.InstancePassive; InstanceMutexUnLock (INSTANCE_MUTEX_HTTPD); } if (InstanceNodeConfig == 1) FaoToStdout ("%HTTPD-I-INSTANCE, 1 process\n"); else FaoToStdout ("%HTTPD-I-INSTANCE, !UL processes\n", InstanceNodeConfig); } /*****************************************************************************/ /* Ready to process requests, just do a lock conversion to CR to indicate this. Queue lock requests for multi-instance notifications. 
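
Other instances can then gauge node readiness simply by counting the CR locks
queued against this resource.  A sketch of how the supervisor uses this when
sequencing a rolling restart (simplified from InstanceSupervisor() below):

   InstanceNodeReady = InstanceLockList (INSTANCE_NODE_READY, NULL, NULL);
   if (InstanceNodeReady == InstanceNodeCurrent)
      ShutdownCount = 0;  // every executing instance is ready to process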
*/ InstanceReady () { int status; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceReady()"); sys$setprv (1, &SysLckMask, 0, 0); /* ready to accept */ InstanceLockTable[INSTANCE_NODE_READY].InUse = true; status = sys$enqw (EfnWait, LCK$K_CRMODE, &InstanceLockTable[INSTANCE_NODE_READY].Lksb, LCK$M_CONVERT | LCK$M_SYSTEM, 0, 0, 0, 0, 0, 0, 2, 0); if (VMSok (status)) status = InstanceLockTable[INSTANCE_NODE_READY].Lksb.lksb$w_status; if (VMSnok (status)) ErrorExitVmsStatus (status, "sys$enqw()", FI_LI); /* receive control messages from other instances */ InstanceNotifySet (INSTANCE_NODE_DO, &ControlHttpdAst); InstanceNotifySet (INSTANCE_CLUSTER_DO, &ControlHttpdAst); sys$setprv (0, &SysLckMask, 0, 0); } /*****************************************************************************/ /* Ensure config file values used not any lingering /DO=INSTANCE=.. */ int InstanceUseConfig () { /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceUseConfig()"); InstanceMutexLock (INSTANCE_MUTEX_HTTPD); HttpdGblSecPtr->InstanceStartupMax = 0; HttpdGblSecPtr->InstancePassive = false; InstanceMutexUnLock (INSTANCE_MUTEX_HTTPD); return (SS$_NORMAL); } /*****************************************************************************/ /* A node has just notified that it's in the process of joining the group of node instance(s). Determine the new number of instances on this node so that this information may be used when locking, displaying administration reports, etc. */ InstanceNodeJoiningAst () { /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceNodeJoiningAst()"); /* note that the instance composition may have changed */ InstanceNodeJoiningCount++; /* kick off ticking to initiate any supervisory activities */ if (!HttpdTicking) HttpdTick (0); } /*****************************************************************************/ /* We've just become the node supervisor!! Either this is the first instance on the node or some other server process (or image) has exited and this process was the next in the conversion queue. */ InstanceNodeSupervisorAst (int AstParam) { /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceNodeSupervisorAst()"); InstanceNodeSupervisor = true; FaoToStdout ("%HTTPD-I-INSTANCE, supervisor\n"); /* ensure supervisor is accepting connections! */ NetActive (true); /* be provided with cluster-wide instance status updates */ AccountingPtr->InstanceNodeData[InstanceNumber].SupervisorUpdate = false; InstanceNotifySet (INSTANCE_CLUSTER_STATUS, &InstanceStatusUpdate); /* kick off ticking to initiate any supervisory activities */ if (!HttpdTicking) HttpdTick (0); } /*****************************************************************************/ /* When the server is processing requests this function is called by HttpdTick() every second. Only one process per node is allowed to perform the activities in this function. At least one node must perform these activities. Returns true to keep the server supervisor ticking, false to say no longer necessary. If a control restart has been requested then only the supervisor node is allowed to restart at any one time (of course all get a turn after one has exited because of the queued supervisor lock being delivered). This is what enables the rolling restart. 
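
As an illustration of the resulting rolling restart with, say, three instances
on the node and this one currently holding the supervisor lock:

   1. a restart is requested; only this (supervisor) instance shuts down,
      the others continue to accept connections
   2. the queued supervisor-lock conversion of one of the surviving
      instances is granted and it takes over these duties
   3. the exited instance's process is recreated and becomes ready again,
      after which the new supervisor restarts in its turn
   4. ... and so on, until every instance has been restarted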
Using InstanceLockList() get the current number of locks queued against the node lock and from that knows how many instances (processes) are currently executing. If less than the required number of instances then create a new server process. */ BOOL InstanceSupervisor () { static BOOL NeedsInstance; static int NodeJoiningCount, PollHttpdTickSecond, RefreshTicketKeySeconds, RestartHttpdTickSecond, RestartQuietCount, ShutdownCount = 30; static char PrcNam [16]; static $DESCRIPTOR (PrcNamDsc, PrcNam); static ulong JpiPid; static VMS_ITEM_LIST3 JpiItems [] = { { sizeof(JpiPid), JPI$_PID, &JpiPid, 0 }, { 0,0,0,0 } }; int idx, status, LockCount, InstanceNodeReady, StartupMax; ushort Length; IO_SB IOsb; INSTANCE_STATUS *isptr; INSTANCE_NODE_DATA *indptr; /*********/ /* begin */ /*********/ if (!InstanceNodeSupervisor) { RefreshTicketKeySeconds = -1; return (false); } if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceSupervisor()"); InstanceNodeCurrent = InstanceLockList (INSTANCE_NODE, NULL, NULL); InstanceClusterCurrent = InstanceLockList (INSTANCE_CLUSTER, NULL, NULL); InstanceMutexLock (INSTANCE_MUTEX_HTTPD); HttpdGblSecPtr->InstanceNodeCurrent = InstanceNodeCurrent; HttpdGblSecPtr->InstanceClusterCurrent = InstanceClusterCurrent; /* check if any other instance has barfed and needs its status updated */ for (idx = 0; idx < INSTANCE_MAX; idx++) { if (!AccountingPtr->InstanceNodeData[idx].SupervisorUpdate) continue; indptr = &AccountingPtr->InstanceNodeData[idx]; if (isptr = InstanceStatusFind (indptr->InstanceName)) if (VMSok (InstanceNotifyWait (INSTANCE_CLUSTER_STATUS, isptr, 0))) indptr->SupervisorUpdate = false; } /* the node supervisor gets to provide it's PID to HTTPDMON, etc. */ HttpdGblSecPtr->HttpdProcessId = HttpdProcess.Pid; InstanceMutexUnLock (INSTANCE_MUTEX_HTTPD); /*******************/ /* restart request */ /*******************/ if (ControlRestartRequested) { if (InstanceNodeCurrent > 1) { /* there is more than one instance executing */ InstanceNodeReady = InstanceLockList (INSTANCE_NODE_READY, NULL, NULL); if (InstanceNodeReady == InstanceNodeCurrent) { /* all of those are processing so restart immediately */ if (ShutdownCount > 0) ShutdownCount = 0; } else { /* There are less ready to process that are executing so wait a maximum of thirty seconds for more to start processing. */ ShutdownCount = 30; } } else if (InstanceNodeConfig == 1) { /* if started with only one instance then restart immediately */ if (ShutdownCount > 0) ShutdownCount = 0; } else if (ShutdownCount > 5) { /* A single instance (this one) is currently executing. Wait five seconds to see if more become current and if none does restart. */ ShutdownCount = 5; } if (ShutdownCount <= 0) { if (!NetCurrentProcessing) { /* no outstanding requests */ FaoToStdout ("%HTTPD-I-CONTROL, server restart\n"); exit (SS$_NORMAL); } if (ShutdownCount == 0) { /* stop receiving incoming connections */ NetShutdownServerSocket (); } if (ShutdownCount < -300) { /* five minutes is a *long* wait for a request to finish! */ FaoToStdout ("%HTTPD-W-CONTROL, server restart timeout\n"); exit (SS$_NORMAL); } } ShutdownCount--; /* don't want to do any of the normal supervisor duties if restarting! 
*/ return (true); } if (ControlRestartQuiet) { if (NetCurrentProcessing) RestartQuietCount = 0; else if (RestartQuietCount++ > 1) { FaoToStdout ("%HTTPD-I-CONTROL, server restart when quiet\n"); exit (SS$_NORMAL); } return (true); } /**********************/ /* ticket key refresh */ /**********************/ if (InstanceNodeJoiningCount != NodeJoiningCount) { /* ensure all nodes are using the same ticket */ NodeJoiningCount = InstanceNodeJoiningCount; /* give it 30 seconds for the cluster to settle */ RefreshTicketKeySeconds = 30; } else if (RefreshTicketKeySeconds > 0) RefreshTicketKeySeconds--; else if (RefreshTicketKeySeconds == 0) { /* (re)try session ticket key refresh */ if (VMSok (InstanceSessionTicketKey (SesolaTicketKey))) RefreshTicketKeySeconds = -1; else RefreshTicketKeySeconds = 30; } else if (SesolaTicketKeySuperDay != HttpdNumTime[2]) { /* refresh session ticket key once a day */ SesolaSessionTicketNewKey(); RefreshTicketKeySeconds = SesolaTicketKeySuperDay = 0; } /*****************/ /* new instance? */ /*****************/ if (HttpdTickSecond < PollHttpdTickSecond) { /* only every so-many seconds do we do a supervisor poll */ if (!NeedsInstance) return (false); /* return true to keep it ticking only when a new instance is needed */ return (true); } PollHttpdTickSecond = HttpdTickSecond + InstanceSupervisorPoll; if (!CliInstanceNoCrePrc && InstanceNodeCurrent < InstanceNodeConfig) { if (!NeedsInstance || HttpdServerStartup) { /* keep it ticking until the next supervisor poll */ NeedsInstance = true; return (true); } for (idx = InstanceNodeConfig > 1 ? 1 : 0; idx < INSTANCE_MAX; idx++) { status = FaoToBuffer (PrcNam, sizeof(PrcNam), &Length, "!AZ!AZ!AZ:!UL", InstanceGroupChars[InstanceEnvNumber], InstanceWasdName ? "WASD" : "HTTP", InstanceWasdName ? InstanceWasdChars[idx] : InstanceHttpChars[idx], ServerPort); if (VMSnok (status) || status == SS$_BUFFEROVF) ErrorExitVmsStatus (status, NULL, FI_LI); if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchDataFormatted ("!&Z\n", PrcNam); PrcNamDsc.dsc$w_length = Length; status = sys$getjpiw (EfnWait, 0, &PrcNamDsc, &JpiItems, &IOsb, 0, 0); if (VMSok (status)) status = IOsb.Status; if (status == SS$_NONEXPR) { /* found a process name that should exist and doesn't */ FaoToStdout ("%HTTPD-I-INSTANCE, !20%D, creating \"!AZ\"\n", 0, PrcNam); if (HttpdNetworkMode) if (VMSnok (status = sys$setprv (1, &CrePrcMask, 0, 0))) ErrorExitVmsStatus (status, "sys$setprv()", FI_LI); HttpdDetachServerProcess (); if (HttpdNetworkMode) if (VMSnok (status = sys$setprv (0, &CrePrcMask, 0, 0))) ErrorExitVmsStatus (status, "sys$setprv()", FI_LI); return (true); } if (VMSnok (status)) ErrorNoticed (NULL, status, NULL, FI_LI); } /* instances fully populated, at least according to process names */ } NeedsInstance = false; return (false); } /*****************************************************************************/ /* If multi-instance propagate this to other instances (cluster-wide as applicable) internally using the DLM and (/DO=)TICKET=KEY command. If the lock value block (LVB) is 64 bytes this propagates the one key to all instances and supports session ticket reuse across all instances. If 16 byte LVB only the TICKEY=KEY command is propagated which refreshes the ticket keys independently on each instance but does not provide cross-instance session reuse. If not multi-instance then directly use SesolaSessionTicketUseKey() to propagate it to the local TLS services. 
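
A sketch of the intended calling pattern, much as the supervisor's daily
refresh uses it (simplified from InstanceSupervisor() above):

   SesolaSessionTicketNewKey ();
   if (VMSok (InstanceSessionTicketKey (SesolaTicketKey)))
      RefreshTicketKeySeconds = -1;   // key propagated to all instances
   else
      RefreshTicketKeySeconds = 30;   // could not distribute; retry shortly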
*/ int InstanceSessionTicketKey (uchar *keyptr) { int status; char Command64 [64]; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceSessionTicketKey()"); if (!(ProtocolHttpsAvailable && ProtocolHttpsConfigured)) return (SS$_NORMAL); if (HttpdGblSecPtr->InstanceNodeCurrent > 1) { memset (Command64, 0, sizeof(Command64)); strcpy (Command64, CONTROL_SESSION_TICKET_KEY); memcpy (Command64 + sizeof(CONTROL_SESSION_TICKET_KEY), keyptr, 48); status = InstanceNotifyWait (INSTANCE_CLUSTER_DO, Command64, INSTANCE_TICKET_WAIT_SECS); if (VMSnok (status)) { ErrorNoticed (NULL, status, CONTROL_SESSION_TICKET_KEY, FI_LI); return (status); } } else SesolaSessionTicketUseKey (keyptr); return (SS$_NORMAL); } /*****************************************************************************/ /* When a pointer to INSTANCE_STATUS data is supplied (i.e. contained in a lock value block) then insert this into the instance's instance data buffer located in the accounting global common. You will not see this with a single node or when there are no clustered instances. When a NULL pointer is supplied populate this instance's INSTANCE_STATUS data in the global common. On a single node or with no clustered instances this is all that's required. If clustered instances use the DLM to provide that data to them. If a DLM update initially fails due to contention then try again with a call each second until it succeeds (or fails after multiple attempts). */ void InstanceStatusUpdate (struct lksb *lksbptr) { static int PrevMinute = -1, RetryCount = 0; static uchar Status64 [LOCK_VALUE_BLOCK_64]; BOOL UpdateNow = false; int idx, max, status, minute; char *cptr, *sptr, *zptr; INSTANCE_NODE_DATA *indptr; INSTANCE_STATUS *isptr, *istptr; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceStatusUpdate() !8XL \'!AZ\' !UL !UL", lksbptr, Status64, InstanceNodeCurrent, SysInfo.ClusterMember); if (HttpdGblSecPtr->InstanceClusterCurrent > HttpdGblSecPtr->InstanceNodeCurrent) { /* for clustered WASD instances need that 64 byte lock value block */ if (SysInfo.LockValueBlockSize != LOCK_VALUE_BLOCK_64) return; } if (lksbptr && !(UpdateNow = (lksbptr == &InstanceStatusNow))) { /*************************/ /* receive remote update */ /*************************/ isptr = lksbptr->lksb$b_valblk; if (WATCH_CAT && Watch.Category) WatchThis (WATCHALL, WATCH_INTERNAL, "STATUS remote !AZ !AZ !UL !UL", isptr->InstanceName, isptr->HttpdVersion, isptr->MinuteCount, isptr->HourCount); /* when developing always test without this code */ #if !defined(WATCH_MOD) || !(WATCH_MOD) if (InstanceStatusTablePtr) { /* make sure it is from another (clustered) node */ istptr = InstanceStatusTablePtr; for (cptr = istptr->InstanceName; *cptr && *cptr != ':'; cptr++); for (sptr = isptr->InstanceName; *sptr && sptr != ':'; sptr++); if (cptr - istptr->InstanceName == sptr - isptr->InstanceName) if (!strncmp (istptr->InstanceName, isptr->InstanceName, cptr - istptr->InstanceName)) return; } #endif InstanceMutexLock (INSTANCE_MUTEX_HTTPD); /* find existing or create new entry in storage table */ if (istptr = InstanceStatusFind (isptr->InstanceName)) { memcpy (istptr, isptr, sizeof(INSTANCE_STATUS)); PUT_QUAD_QUAD (&HttpdTime64, &istptr->UpdateTime64); if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "!AZ !AZ !UL !UL", istptr->InstanceName, istptr->HttpdVersion, istptr->MinuteCount, istptr->HourCount); 
} InstanceMutexUnLock (INSTANCE_MUTEX_HTTPD); return; } /************************/ /* provide local update */ /************************/ /* not InstanceStatusNow() so only every minute or as a retry */ if (!UpdateNow && !RetryCount && PrevMinute == HttpdNumTime[4]) return; if (RetryCount) { /* retry of a previously failed-to-update (via the DLM) */ istptr = (INSTANCE_STATUS*)&Status64; if (WATCH_CAT && Watch.Category) WatchThis (WATCHALL, WATCH_INTERNAL, "STATUS retry !AZ !AZ !UL !UL", istptr->InstanceName, istptr->HttpdVersion, istptr->MinuteCount, istptr->HourCount); } else { indptr = &AccountingPtr->InstanceNodeData[InstanceNumber]; InstanceMutexLock (INSTANCE_MUTEX_HTTPD); if (InstanceStatusTablePtr) istptr = InstanceStatusTablePtr; else { /**************/ /* initialise */ /**************/ zptr = (sptr = indptr->InstanceName) + sizeof(indptr->InstanceName)-1; for (cptr = SysInfo.NodeName; *cptr && sptr < zptr; *sptr++ = *cptr++); for (cptr = "::"; *cptr && sptr < zptr; *sptr++ = *cptr++); for (cptr = HttpdProcess.PrcNam; *cptr && sptr < zptr; *sptr++ = *cptr++); *sptr = '\0'; if (!(istptr = InstanceStatusFind (indptr->InstanceName))) { /* table exhausted (unlikely but possible) */ InstanceMutexUnLock (INSTANCE_MUTEX_HTTPD); return; } InstanceStatusTablePtr = istptr; istptr->StartupCount = indptr->StartupCount; PUT_QUAD_QUAD (&indptr->StartTime64, &istptr->StartTime64); istptr->ExitStatus = indptr->ExitStatus & 0x0fffffff; PUT_QUAD_QUAD (&indptr->ExitTime64, &istptr->ExitTime64); zptr = (sptr = istptr->HttpdVersion) + sizeof(istptr->HttpdVersion)-1; for (cptr = HttpdVersion; *cptr && sptr < zptr; *sptr++ = *cptr++); *sptr = '\0'; } if (PrevMinute != HttpdNumTime[4]) { /**************/ /* per-minute */ /**************/ if (minute = (PrevMinute = HttpdNumTime[4])) minute--; else minute = 59; AccountingPtr->InstanceNodeData[InstanceNumber]. RequestCount[minute] = InstanceStatusRequestCount; InstanceStatusRequestCount = 0; } else if (minute = HttpdNumTime[4]) minute--; else minute = 59; /*******************/ /* populate latest */ /*******************/ /* the minute that has just passed */ istptr->MinuteCount = indptr->RequestCount[minute]; /* the hour just passed by accumulating the last sixty minutes */ istptr->HourCount = 0; for (minute = 0; minute < 60; minute++) istptr->HourCount += indptr->RequestCount[minute]; /* with a single node (no cluster) the table is self-updating */ PUT_QUAD_QUAD (&HttpdTime64, &istptr->UpdateTime64); InstanceMutexUnLock (INSTANCE_MUTEX_HTTPD); if (WATCH_CAT && WATCH_CATEGORY(WATCH_INTERNAL)) WatchThis (WATCHALL, WATCH_INTERNAL, "STATUS local !AZ !AZ !UL !UL", istptr->InstanceName, istptr->HttpdVersion, istptr->MinuteCount, istptr->HourCount); } /* always check the DLM distribution when developing */ #if !defined(WATCH_MOD) || !(WATCH_MOD) if (HttpdGblSecPtr->InstanceClusterCurrent > HttpdGblSecPtr->InstanceNodeCurrent) #endif { /***********/ /* use DLM */ /***********/ if (VMSok (InstanceNotifyWait (INSTANCE_CLUSTER_STATUS, istptr, 0))) RetryCount = 0; else if (RetryCount) RetryCount--; else { RetryCount = INSTANCE_STATUS_UPDATE_RETRIES; memcpy (Status64, istptr, sizeof(Status64)); } } } /*****************************************************************************/ /* The instance has been /DO=STATUS=NOW. Call the update function with its own code entry point as a sentinal. This then generates a local update. 
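
Using the function's own entry point merely provides a sentinel value that can
never be mistaken for a genuine lock status block address, e.g. at the top of
InstanceStatusUpdate():

   UpdateNow = (lksbptr == &InstanceStatusNow);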
*/ void InstanceStatusNow () { /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceStatusNow()"); InstanceStatusUpdate (&InstanceStatusNow); } /*****************************************************************************/ /* Called by HttpdExit() this is a "last-gasp" update to advise of the instance's exit status. Do not employ a mutex on the accounting data. Do not employ the DLM to advise other instances. Have the node supervisor do that on the exiting instances's behalf, or in the case of a single instance node when it restarts. The report will (eventually) indicate instance data staleness if it doen't come back up. Whatever, avoid the use of additional complexities during error exits. */ void InstanceStatusExit (int ExitStatus) { int status; INSTANCE_NODE_DATA *indptr; INSTANCE_STATUS *istptr; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceStatusExit()"); indptr = &AccountingPtr->InstanceNodeData[InstanceNumber]; indptr->ExitStatus = ExitStatus & 0x0fffffff; sys$gettim (indptr->ExitTime64); /* if (for whatever reason) there is no entry in the status data table */ if (!(istptr = InstanceStatusTablePtr)) return; istptr->ExitStatus = indptr->ExitStatus; PUT_QUAD_QUAD (&indptr->ExitTime64, &istptr->ExitTime64); /* don't complicate it further by using the DLM during an error exit */ if (VMSnok (ExitStatus)) { indptr->SupervisorUpdate = true; return; } /* always check the DLM distribution when developing */ #if !defined(WATCH_MOD) || !(WATCH_MOD) if (HttpdGblSecPtr->InstanceClusterCurrent > HttpdGblSecPtr->InstanceNodeCurrent) #endif { /* use the DLM if clustered instances */ status = InstanceNotifyWait (INSTANCE_CLUSTER_STATUS, istptr, INSTANCE_STATUS_UPDATE_WAIT_SECS); if (VMSnok (status)) indptr->SupervisorUpdate = true; } } /*****************************************************************************/ /* Accounting mutex MUST be held before calling this function. Find the entry for the node::process-name supplied. When found return a pointer to that entry. If not found create a new entry and return a pointer to that, or NULL to indicate the table is full. */ INSTANCE_STATUS* InstanceStatusFind (char *InstanceName) { int idx, idx0, max; INSTANCE_STATUS *istptr; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceStatusFind() \"!AZ\"", InstanceName); max = AccountingPtr->InstanceStatusTableCount; for (idx = idx0 = 0; idx < max; idx++) { istptr = &AccountingPtr->InstanceStatusTable[idx]; if (QUAD_ZERO(istptr->UpdateTime64)) { /* this is a purged entry so note it */ idx0 = idx; continue; } if (!MATCH8 (istptr->InstanceName, InstanceName)) continue; if (!strcmp (istptr->InstanceName+8, InstanceName+8)) break; } if (idx >= max) { /* instance not found */ if (!idx0 && max >= INSTANCE_STATUS_TABLE_MAX) { /* hmmm, all consumed! */ ErrorNoticed (NULL, SS$_BUFFEROVF, NULL, FI_LI); return (NULL); } /* redeploy a purged entry or add a new one */ if (idx0) idx = idx0; else idx = AccountingPtr->InstanceStatusTableCount++; istptr = &AccountingPtr->InstanceStatusTable[idx]; strcpy (istptr->InstanceName, InstanceName); } return (istptr); } /*****************************************************************************/ /* Remove stale entries from instance status table. 
*/ void InstanceStatusPurge () { static ulong LibDeltaMins = LIB$K_DELTA_MINUTES; int idx, max, status; ulong MinsAgo; ulong AgoTime64 [2]; INSTANCE_STATUS *istptr; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceStatusPurge()"); InstanceMutexLock (INSTANCE_MUTEX_HTTPD); max = AccountingPtr->InstanceStatusTableCount; for (idx = 0; idx < max; idx++) { istptr = &AccountingPtr->InstanceStatusTable[idx]; status = lib$sub_times (&HttpdTime64, &istptr->UpdateTime64, &AgoTime64); if (VMSok (status)) status = lib$cvt_from_internal_time (&LibDeltaMins, &MinsAgo, &AgoTime64); if (VMSnok (status)) { ErrorNoticed (NULL, status, NULL, FI_LI); break; } if (MinsAgo >= INSTANCE_STATUS_STALE_MINS) memset (istptr, 0, sizeof(INSTANCE_STATUS)); } InstanceMutexUnLock (INSTANCE_MUTEX_HTTPD); } /*****************************************************************************/ /* Zero all instance status data in the global common forcing it to be repopulated by the various instances' status processing. May take a minute or two before it starts to look "normal" again. */ void InstanceStatusReset () { INSTANCE_NODE_DATA *indptr; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceStatusReset()"); indptr = &AccountingPtr->InstanceNodeData[InstanceNumber]; InstanceMutexLock (INSTANCE_MUTEX_HTTPD); if (AccountingPtr->InstanceStatusTableCount) { AccountingPtr->InstanceStatusTableCount = 0; memset (&AccountingPtr->InstanceStatusTable, 0, sizeof(AccountingPtr->InstanceStatusTable)); memset (&indptr->RequestCount, 0, sizeof(indptr->RequestCount)); } InstanceMutexUnLock (INSTANCE_MUTEX_HTTPD); InstanceStatusTablePtr = NULL; } /*****************************************************************************/ /* Provide report element for inclusion in the Server Admin page. */ void InstanceStatusAdminReport (REQUEST_STRUCT *rqptr) { static ulong LibDeltaMins = LIB$K_DELTA_MINUTES; int idx, len, maxtab, maxlen, status, total; ulong MinsAgo; ulong AgoTime64 [2]; char *stptr; char number [16], ExitAgoBuf [16], ExitStatusBuf [16], StartAgoBuf [16], TimeExit [32], TimeStartup [32], UpdateAgoBuf [32]; INSTANCE_STATUS *istptr; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceStatusAdminReport()"); /* need that 64 byte lock value block */ if (SysInfo.LockValueBlockSize != LOCK_VALUE_BLOCK_64) { status = FaolToNet (rqptr, "\n", NULL); return; } InstanceMutexLock (INSTANCE_MUTEX_HTTPD); maxtab = AccountingPtr->InstanceStatusTableCount; for (idx = total = 0; idx < maxtab; idx++) { istptr = &AccountingPtr->InstanceStatusTable[idx]; if (QUAD_ZERO(istptr->UpdateTime64)) continue; total++; } if (total <= 1) { /* nothing to see here! 
*/ InstanceMutexUnLock (INSTANCE_MUTEX_HTTPD); return; } status = FaolToNet (rqptr, "\n\ \n\ \ \ \ \n", NULL); if (VMSnok (status)) ErrorNoticed (rqptr, status, NULL, FI_LI); maxtab = AccountingPtr->InstanceStatusTableCount; for (idx = total = 0; idx < maxtab; idx++) { istptr = &AccountingPtr->InstanceStatusTable[idx]; /* if a purged entry */ if (QUAD_ZERO(istptr->UpdateTime64)) continue; ThisLongAgo (&istptr->ExitTime64, ExitAgoBuf); ThisLongAgo (&istptr->StartTime64, StartAgoBuf); ThisLongAgo (&istptr->UpdateTime64, UpdateAgoBuf); TimeSansYear (&istptr->StartTime64, TimeStartup); TimeSansYear (&istptr->ExitTime64, TimeExit); status = lib$sub_times (&HttpdTime64, &istptr->UpdateTime64, &AgoTime64); if (VMSok (status)) status = lib$cvt_from_internal_time (&LibDeltaMins, &MinsAgo, &AgoTime64); if (VMSok(status) && MinsAgo > INSTANCE_STATUS_STALE_MINS) { stptr = " style=\"text-decoration:line-through\""; number[0] = '\0'; } else { stptr = ""; sprintf (number, "%d", ++total); } if (istptr->ExitStatus) FaoToBuffer (ExitStatusBuf, sizeof(ExitStatusBuf), NULL, "%X!8XL", istptr->ExitStatus); else ExitStatusBuf[0] = '\0'; FaoToNet (rqptr, "\ \ \ \ \ \n", stptr, number, istptr->InstanceName, UpdateAgoBuf, TimeStartup, StartAgoBuf, istptr->StartupCount, TimeExit, ExitAgoBuf, ExitStatusBuf, istptr->MinuteCount, istptr->HourCount); } InstanceMutexUnLock (INSTANCE_MUTEX_HTTPD); status = FaolToNet (rqptr, "
InstanceAgoStartedAgoCountExitedAgoStatus/Min/Hour
!AZ!AZ!AZ!AZ!AZ!UL!AZ!AZ!AZ!UL!UL
\n\ \n", NULL); if (VMSnok (status)) ErrorNoticed (rqptr, status, NULL, FI_LI); } /*****************************************************************************/ /* Report the instance table to . Adjusts report depending on line width 80 or 132. */ void InstanceStatusCliReport (REQUEST_STRUCT *rqptr) { static ulong LibDeltaMins = LIB$K_DELTA_MINUTES; static $DESCRIPTOR (ttDsc, "TT:"); static ulong LineLength; static struct { short BufferLength; short ItemCode; void *BufferPtr; void *LengthPtr; } DevBufSizItemList [] = { { sizeof(LineLength), DVI$_DEVBUFSIZ, &LineLength, 0 }, { 0, 0, 0, 0 } }; BOOL stale; int idx, len, maxtab, maxlen, status, total; ulong MinsAgo; ulong AgoTime64 [2], NowTime64 [2]; char *cptr; char buf [512], number [16], ExitAgoBuf [16], ExitStatusBuf [16], StartAgoBuf [16], TimeExit [32], TimeStartup [32], TmpAgoBuf [16], UpdateAgoBuf [16]; IO_SB IOsb; INSTANCE_STATUS *istptr; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceStatusCliReport()"); if (rqptr) { if (SysInfo.LockValueBlockSize != LOCK_VALUE_BLOCK_64) { ErrorVmsStatus (rqptr, SS$_UNSUPPORTED, FI_LI); AdminEnd (rqptr); } ResponseHeader (rqptr, 200, "text/plain", -1, NULL, NULL); } else { if (SysInfo.LockValueBlockSize != LOCK_VALUE_BLOCK_64) exit (SS$_NOSUCHREPORT); status = HttpdGblSecMap (); if (VMSnok (status)) exit (status); } status = sys$getdviw (EfnWait, 0, &ttDsc, &DevBufSizItemList, &IOsb, 0, 0, 0); if (VMSok (status)) status = IOsb.Status; if (VMSnok (status)) exit (status); sys$gettim (&NowTime64); InstanceMutexLock (INSTANCE_MUTEX_HTTPD); maxtab = AccountingPtr->InstanceStatusTableCount; for (idx = maxlen = 0; idx < maxtab; idx++) { istptr = &AccountingPtr->InstanceStatusTable[idx]; if ((len = strlen(istptr->InstanceName)) > maxlen) maxlen = len; } if (maxlen < 8) maxlen = 8; if (LineLength > 80) FaoToBuffer (buf, sizeof(buf), NULL, "\ !#AZ !4AZ !15AZ !4AZ !5AZ !15AZ !4AZ !10AZ !7AZ !4AZ !6AZ\n\ !#*~ !4*~ !15*~ !4*~ !5*~ !15*~ !4*~ !10*~ !7*~ !4*~ !6*~\n", maxlen, "Instance", " Ago", "Up", " Ago", "Count", "Exit", " Ago", "Status", "Version", "/Min", " /Hour", maxlen); else FaoToBuffer (buf, sizeof(buf), NULL, "\ !#AZ !4AZ !4AZ !5AZ !4AZ !10AZ !7AZ !4AZ !6AZ\n\ !#*~ !4*~ !4*~ !5*~ !4*~ !10*~ !7*~ !4*~ !6*~\n", maxlen, "Instance", " Ago", " Up", "Count", "Exit", "Status", "Version", "/Min", " /Hour", maxlen); if (rqptr) FaoToNet (rqptr, "!AZ", buf); else FaoToStdout ("!AZ", buf); for (idx = total = 0; idx < maxtab; idx++) { istptr = &AccountingPtr->InstanceStatusTable[idx]; /* if a purged entry */ if (QUAD_ZERO(istptr->UpdateTime64)) continue; /* right justify each of these (really should incorporate in FAO.C) */ ThisLongAgo (&istptr->ExitTime64, TmpAgoBuf); sprintf (ExitAgoBuf, "%4s", TmpAgoBuf); ThisLongAgo (&istptr->StartTime64, TmpAgoBuf); sprintf (StartAgoBuf, "%4s", TmpAgoBuf); ThisLongAgo (&istptr->UpdateTime64, TmpAgoBuf); sprintf (UpdateAgoBuf, "%4s", TmpAgoBuf); TimeSansYear (&istptr->StartTime64, TimeStartup); if (TimeStartup[0] == ' ') TimeStartup[0] = '0'; TimeSansYear (&istptr->ExitTime64, TimeExit); if (TimeExit[0] == ' ') TimeExit[0] = '0'; if (istptr->ExitStatus) FaoToBuffer (ExitStatusBuf, sizeof(ExitStatusBuf), NULL, "%X!8XL", istptr->ExitStatus); else ExitStatusBuf[0] = '\0'; status = lib$sub_times (&NowTime64, &istptr->UpdateTime64, &AgoTime64); if (VMSok (status)) status = lib$cvt_from_internal_time (&LibDeltaMins, &MinsAgo, &AgoTime64); if (VMSok(status) && MinsAgo > INSTANCE_STATUS_STALE_MINS) { stale = 
true; strcpy (number, " "); } else { stale = false; sprintf (number, "%2d", ++total); } if (LineLength > 80) FaoToBuffer (buf, sizeof(buf), NULL, "!AZ !#AZ !4AZ !15AZ !4AZ !5UL !15AZ !4AZ !10AZ !7AZ !4UL !6UL\n", number, maxlen, istptr->InstanceName, UpdateAgoBuf, TimeStartup, StartAgoBuf, istptr->StartupCount, TimeExit, ExitAgoBuf, ExitStatusBuf, istptr->HttpdVersion, istptr->MinuteCount, istptr->HourCount); else FaoToBuffer (buf, sizeof(buf), NULL, "!AZ !#AZ !4AZ !4AZ !5UL !4AZ !10AZ !7AZ !4UL !6UL\n", number, maxlen, istptr->InstanceName, UpdateAgoBuf, StartAgoBuf, istptr->StartupCount, ExitAgoBuf, ExitStatusBuf, istptr->HttpdVersion, istptr->MinuteCount, istptr->HourCount); if (stale) for (cptr = buf+4; *cptr; cptr++) if (*cptr == ' ') *cptr = '-'; if (rqptr) FaoToNet (rqptr, "!AZ", buf); else FaoToStdout ("!AZ", buf); } InstanceMutexUnLock (INSTANCE_MUTEX_HTTPD); if (maxtab) FaoToBuffer (buf, sizeof(buf), NULL, " as at !20%D\n", 0); else FaoToBuffer (buf, sizeof(buf), NULL, " 0 as at !20%D\n", 0); if (rqptr) { FaoToNet (rqptr, "!AZ", buf); AdminEnd (rqptr); } else FaoToStdout ("!AZ", buf); } /*****************************************************************************/ /* Proactively dequeue all locks. I would have thought image exit would have done this "quickly enough", but it appears as if there are still sufficient locks when a move of supervisor role occurs to defeat the logic in the restart and create process code! Perhaps this only occurs with the '$DELPRC(0,0)' and it takes a while for the DLM to catch up? */ void InstanceExit () { int idx, status; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceExit()"); /* unlock any instance-locked mutexes */ for (idx = 1; idx <= INSTANCE_MUTEX_COUNT; idx++) { if (!InstanceMutexHeld[idx]) continue; _BBCCI (0, &HttpdGblSecPtr->Mutex[idx]); } /*** sys$setprv (1, &SysLckMask, 0, 0); for (idx = 1; idx <= INSTANCE_LOCK_COUNT; idx++) sys$deq (InstanceLockTable[idx].Lksb.lksb$l_lkid, 0, 0, 0); sys$setprv (0, &SysLckMask, 0, 0); ***/ } /*****************************************************************************/ /* Set the server process name. If multiple instances have been configured for step through the process names available breaking at the first successful. This becomes the "instance" name of this particular process on the node. */ InstanceProcessName () { static $DESCRIPTOR (LogNameDsc, "WASD_PROCESS_NAME"); static $DESCRIPTOR (LnmFileDevDsc, "LNM$FILE_DEV"); static char NameBuffer [16]; static VMS_ITEM_LIST3 NameLnmItem [] = { { sizeof(NameBuffer), LNM$_STRING, NameBuffer, 0 }, { 0,0,0,0 } }; int idx, status; ushort Length; $DESCRIPTOR (PrcNamDsc, HttpdProcess.PrcNam); /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceProcessName() !UL", InstanceNodeConfig); status = sys$trnlnm (0, &LnmFileDevDsc, &LogNameDsc, 0, &NameLnmItem); if (VMSok(status)) if (NameBuffer[0] == '0' || TOUP(NameBuffer[0]) == 'F') InstanceWasdName = false; for (idx = InstanceNodeConfig > 1 ? 1 : 0; idx < INSTANCE_MAX; idx++) { status = FaoToBuffer (HttpdProcess.PrcNam, sizeof(HttpdProcess.PrcNam), &Length, "!AZ!AZ!AZ:!UL", InstanceGroupChars[InstanceEnvNumber], InstanceWasdName ? "WASD" : "HTTP", InstanceWasdName ? 
InstanceWasdChars[idx] : InstanceHttpChars[idx], ServerPort); if (VMSnok (status) || status == SS$_BUFFEROVF) ErrorExitVmsStatus (status, NULL, FI_LI); if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchDataFormatted ("!&Z\n", HttpdProcess.PrcNam); PrcNamDsc.dsc$w_length = HttpdProcess.PrcNamLength = Length; if (VMSok (status = sys$setprn (&PrcNamDsc))) break; } if (VMSnok (status)) ErrorExitVmsStatus (status, "sys$setprn()", FI_LI); FaoToStdout ("%HTTPD-I-INSTANCE, process name !AZ\n", HttpdProcess.PrcNam); InstanceNumber = idx; } /*****************************************************************************/ /* The "administration socket" is used to to connect exclusively to a single instance (normally connects are distributed between instances). This function distributes the IP port (in decimal) across the cluster via InstanceSocketForAdmin(). Creates a lock resource with a name based on the process name and stores in it's lock value block the number (in ASCII as always) of it's "internal", per-instance (process) admininstration port. */ int InstanceSocketAdmin (short IpPort) { int enqfl, status, NameLength; char *cptr, *sptr, *zptr; IO_SB IOsb; INSTANCE_LOCK *ilptr; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceSocketAdmin() !UL", IpPort); ilptr = &InstanceLockAdmin; /* build the (binary) resource name the admin lock */ zptr = (sptr = ilptr->Name) + sizeof(ilptr->Name)-1; for (cptr = HTTPD_NAME; *cptr && sptr < zptr; *sptr++ = *cptr++); if (sptr < zptr) *sptr++ = (char)InstanceLockNameMagic; for (cptr = SysInfo.NodeName; *cptr && sptr < zptr; *sptr++ = *cptr++); if (sptr < zptr) *sptr++ = ':'; if (sptr < zptr) *sptr++ = ':'; for (cptr = HttpdProcess.PrcNam; *cptr && sptr < zptr; *sptr++ = *cptr++); NameLength = sptr - ilptr->Name; ilptr->NameDsc.dsc$w_length = NameLength; ilptr->NameDsc.dsc$a_pointer = ilptr->Name; FaoToBuffer (&ilptr->Lksb.lksb$b_valblk, SysInfo.LockValueBlockSize, NULL, "!UL", (ushort)IpPort); /* queue at EX then convert to NL causing lock value block to be written */ sys$setprv (1, &SysLckMask, 0, 0); if (ilptr->InUse) ErrorExitVmsStatus (SS$_BUGCHECK, ErrorSanityCheck, FI_LI); ilptr->InUse = true; status = sys$enqw (EfnWait, LCK$K_EXMODE, &ilptr->Lksb, LCK$M_NOQUEUE | LCK$M_SYSTEM, &ilptr->NameDsc, 0, 0, 0, 0, 2, 0); if (VMSok (status)) status = ilptr->Lksb.lksb$w_status; /* this just shouldn't happen */ if (VMSnok (status)) ErrorExitVmsStatus (status, "sys$enqw()", FI_LI); enqfl = LCK$M_VALBLK | LCK$M_CONVERT | LCK$M_SYSTEM; if (SysInfo.LockValueBlockSize == LOCK_VALUE_BLOCK_64) enqfl |= LCK$M_XVALBLK; status = sys$enqw (EfnWait, LCK$K_NLMODE, &ilptr->Lksb, enqfl, 0, 0, 0, 0, 0, 0, 2, 0); if (VMSok (status)) status = ilptr->Lksb.lksb$w_status; if (VMSnok (status)) ErrorExitVmsStatus (status, "sys$enqw()", FI_LI); sys$setprv (0, &SysLckMask, 0, 0); if (status == SS$_XVALNOTVALID) { /* hmmm, change in cluster composition? whatever! go back to 16 bytes */ SysInfo.LockValueBlockSize = LOCK_VALUE_BLOCK_16; ErrorNoticed (NULL, SS$_XVALNOTVALID, ErrorXvalNotValid, FI_LI); } return (status); } /*****************************************************************************/ /* Given a process name in the format 'node::WASD:port' (e.g. "DELTA::WASD:80") generate the same lock name as InstanceSocketAdmin() and queue a NL lock, then get the lock value block using sys$getlki(). Retrieve the decimal port into the supplied pointed-to storage. Return a VMS status code. 
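
For example (a sketch only):

   short AdminPort;
   status = InstanceSocketForAdmin ("DELTA::WASD:80", &AdminPort);

on success AdminPort then contains the port to connect to on node DELTA to
reach that specific instance.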
*/ int InstanceSocketForAdmin ( char *ProcessName, short *IpPortPtr ) { static ulong Lki_XVALNOTVALID; static char LockName [31+1]; static struct lksb LockSb; static VMS_ITEM_LIST3 LkiItems [] = { /* careful, values are dynamically assigned in code below! */ { 0, 0, 0, 0 }, /* reserved for LKI$_[X]VALBLK item */ { 0, 0, 0, 0 }, /* reserved for LKI$_XVALNOTVALID item */ {0,0,0,0} }; static $DESCRIPTOR (LockNameDsc, LockName); int retval, status; char *cptr, *sptr, *zptr; IO_SB IOsb; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceSocketForAdmin() !&Z", ProcessName); if (SysInfo.LockValueBlockSize == LOCK_VALUE_BLOCK_64) { LkiItems[0].buf_len = LOCK_VALUE_BLOCK_64; LkiItems[0].buf_addr = &LockSb.lksb$b_valblk; LkiItems[0].item = LKI$_XVALBLK; LkiItems[1].buf_len = sizeof(Lki_XVALNOTVALID); LkiItems[1].buf_addr = &Lki_XVALNOTVALID; LkiItems[1].item = LKI$_XVALNOTVALID; } else { LkiItems[0].buf_len = LOCK_VALUE_BLOCK_16; LkiItems[0].buf_addr = &LockSb.lksb$b_valblk; LkiItems[0].item = LKI$_VALBLK; /* in this case this terminates the item list */ LkiItems[1].buf_len = 0; LkiItems[1].buf_addr = 0; LkiItems[1].item = 0; Lki_XVALNOTVALID = 0; } /* build the (binary) resource name the admin lock */ zptr = (sptr = LockName) + sizeof(LockName)-1; for (cptr = HTTPD_NAME; *cptr && sptr < zptr; *sptr++ = *cptr++); if (sptr < zptr) *sptr++ = (char)InstanceLockNameMagic; for (cptr = ProcessName; *cptr && sptr < zptr; *sptr++ = *cptr++); LockNameDsc.dsc$w_length = sptr - LockName; sys$setprv (1, &SysLckMask, 0, 0); status = sys$enqw (EfnWait, LCK$K_NLMODE, &LockSb, LCK$M_SYSTEM, &LockNameDsc, 0, 0, 0, 0, 0, 2, 0); if (VMSok (status)) status = LockSb.lksb$w_status; if (VMSok (status)) { status = sys$getlkiw (EfnWait, &LockSb.lksb$l_lkid, &LkiItems, &IOsb, 0, 0, 0); if (VMSok (status)) status = IOsb.Status; } sys$deq (LockSb.lksb$l_lkid, 0, 0, 0); sys$setprv (0, &SysLckMask, 0, 0); if (VMSnok (status)) return (status); if (Lki_XVALNOTVALID) { /* hmmm, change in cluster composition? whatever! go back to 16 bytes */ SysInfo.LockValueBlockSize = LOCK_VALUE_BLOCK_16; ErrorNoticed (NULL, SS$_XVALNOTVALID, ErrorXvalNotValid, FI_LI); } if (IpPortPtr) *IpPortPtr = atoi(&LockSb.lksb$b_valblk); return (SS$_NORMAL); } /*****************************************************************************/ /* This function controls the creation of bound sockets, and distribution of the BG: device names, amongst per-node instances of the server. This function is called one or two times to do it's job, which is to create a per-node lock resource name containing a BINARY representation of a service IP address and port. Binary is necessary to be able to contain the 16 byte address + 2 byte port of IPv6. The resource name becomes the 5 character resource name prefix (see above), the (up to) 6 character node name the finally the 18 byte socket address, a total of 29 characters (out of a possible 31). The first call checks if this instance already has a channel to the requested socket (address/port combination). If it does (stored in a local table) it returns the BG device name with a leading underscore. If not it checks The first (and possibly second) call has 'BgDevName' as NULL and creates the resource name, enqueues a CR lock then converts it to NL which causes the lock value block to be returned. This can be checked for a string with the BG: device name (e.g. "_BG206:") of any previously created listening socket for the address and port. 
If such a string is found then a pointer to it is returned and it can be used to assign another channel to it. If the lock value block is empty a NULL is returned, the calling routine then creates and binds a socket, then calls this function again. This time with the 'BgDeviceName' is non-NULL and points to a string containing the device name (e.g. "_BG206:"). This is copied to the lock value block, an EX mode lock enqueued then converted back to NL to write the lock value, making it available for use by other processes. This function assumes some other overall lock prevents other processes from using this function while it is called two times (i.e. the service creation process is locked). */ char* InstanceSocket ( IPADDRESS *ipaptr, short IpPort, char *BgDevName ) { static char DeviceName [LOCK_VALUE_BLOCK_64]; int cnt, enqfl, status, SocketNameLength; char *cptr, *sptr, *zptr; char SocketName [31+1]; INSTANCE_SOCKET_LOCK *islptr; $DESCRIPTOR (NameDsc, ""); /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceSocket()"); if (BgDevName) { /*************************************/ /* socket created, store device name */ /*************************************/ islptr = &InstanceSocketTable[InstanceSocketCount]; /* store the BG device name in the lock value block */ sptr = islptr->Lksb.lksb$b_valblk; zptr = sptr + sizeof(islptr->Lksb.lksb$b_valblk)-1; for (cptr = BgDevName; *cptr && sptr < zptr; *sptr++ = *cptr++); *sptr = '\0'; enqfl = LCK$M_VALBLK | LCK$M_CONVERT | LCK$M_SYSTEM; if (SysInfo.LockValueBlockSize == LOCK_VALUE_BLOCK_64) enqfl |= LCK$M_XVALBLK; sys$setprv (1, &SysLckMask, 0, 0); /* convert NL to EX then back to NL, lock value block is written */ status = sys$enqw (EfnWait, LCK$K_EXMODE, &islptr->Lksb, LCK$M_CONVERT | LCK$M_SYSTEM, 0, 0, 0, 0, 0, 0, 2, 0); if (VMSok (status)) status = islptr->Lksb.lksb$w_status; if (VMSnok (status)) ErrorExitVmsStatus (status, "sys$enqw()", FI_LI); status = sys$enqw (EfnWait, LCK$K_NLMODE, &islptr->Lksb, enqfl, 0, 0, 0, 0, 0, 0, 2, 0); if (VMSok (status)) status = islptr->Lksb.lksb$w_status; if (VMSnok (status)) ErrorExitVmsStatus (status, "sys$enqw()", FI_LI); sys$setprv (0, &SysLckMask, 0, 0); if (status == SS$_XVALNOTVALID) { /* hmmm, change in cluster composition? whatever! 
back to 16 bytes */ SysInfo.LockValueBlockSize = LOCK_VALUE_BLOCK_16; ErrorNoticed (NULL, SS$_XVALNOTVALID, ErrorXvalNotValid, FI_LI); } InstanceSocketCount++; return (NULL); } /***************************************/ /* build the socket lock resource name */ /***************************************/ zptr = (sptr = SocketName) + sizeof(SocketName)-1; for (cptr = HTTPD_NAME; *cptr && sptr < zptr; *sptr++ = *cptr++); if (sptr < zptr) *sptr++ = (char)InstanceLockNameMagic; cptr = SysInfo.NodeName; while (*cptr && sptr < zptr) *sptr++ = *cptr++; if (sptr < zptr) { if (IPADDRESS_IS_V4(ipaptr)) *sptr++ = (char)INSTANCE_NODE_SOCKIP4; else *sptr++ = (char)INSTANCE_NODE_SOCKIP6; } cnt = IPADDRESS_SIZE(ipaptr); cptr = IPADDRESS_ADR46(ipaptr); while (cnt-- && sptr < zptr) *sptr++ = *cptr++; cnt = sizeof(short); cptr = (char*)&IpPort; while (cnt-- && sptr < zptr) *sptr++ = *cptr++; if (sptr >= zptr) ErrorExitVmsStatus (0, ErrorSanityCheck, FI_LI); *sptr = '\0'; /* not really necessary */ SocketNameLength = sptr - SocketName; if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchDataDump (SocketName, SocketNameLength); /**************************************************************/ /* check if this instance already has a channel to the socket */ /**************************************************************/ for (cnt = 0; cnt < InstanceSocketCount; cnt++) { islptr = &InstanceSocketTable[cnt]; if (MATCH0 (islptr->Name, SocketName, SocketNameLength)) break; } if (cnt >= InstanceSocketCount) islptr = NULL; if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "!&?YES\rNO\r", islptr); if (islptr) { /* yes it has! */ zptr = (sptr = DeviceName) + sizeof(DeviceName)-1; cptr = &islptr->Lksb.lksb$b_valblk; if (*cptr == '_') cptr++; *sptr++ = '_'; while (*cptr && sptr < zptr) *sptr++ = *cptr++; *sptr = '\0'; /* return with a leading underscore */ return (DeviceName); } /**************************************************/ /* check if another instance has bound the socket */ /**************************************************/ islptr = &InstanceSocketTable[InstanceSocketCount]; memcpy (islptr->Name, SocketName, SocketNameLength+1); NameDsc.dsc$w_length = SocketNameLength; NameDsc.dsc$a_pointer = islptr->Name; sys$setprv (1, &SysLckMask, 0, 0); /* this is the basic place-holding, resource instantiating lock */ status = sys$enqw (EfnWait, LCK$K_NLMODE, &islptr->Lksb, LCK$M_EXPEDITE | LCK$M_SYSTEM, &NameDsc, 0, 0, 0, 0, 0, 2, 0); if (VMSok (status)) status = islptr->Lksb.lksb$w_status; if (VMSnok (status)) ErrorExitVmsStatus (status, "sys$enqw()", FI_LI); enqfl = LCK$M_VALBLK | LCK$M_CONVERT | LCK$M_SYSTEM; if (SysInfo.LockValueBlockSize == LOCK_VALUE_BLOCK_64) enqfl |= LCK$M_XVALBLK; /* convert NL to CR then back to NL, the lock value block is returned */ status = sys$enqw (EfnWait, LCK$K_CRMODE, &islptr->Lksb, enqfl, 0, 0, 0, 0, 0, 0, 2, 0); if (VMSok (status)) status = islptr->Lksb.lksb$w_status; if (VMSnok (status)) ErrorExitVmsStatus (status, "sys$enqw()", FI_LI); if (status == SS$_XVALNOTVALID) { /* hmmm, change in cluster composition? whatever! 
go back to 16 bytes */ SysInfo.LockValueBlockSize = LOCK_VALUE_BLOCK_16; ErrorNoticed (NULL, SS$_XVALNOTVALID, ErrorXvalNotValid, FI_LI); } /* back to NL mode */ status = sys$enqw (EfnWait, LCK$K_NLMODE, &islptr->Lksb, LCK$M_CONVERT | LCK$M_SYSTEM, 0, 0, 0, 0, 0, 0, 2, 0); if (VMSok (status)) status = islptr->Lksb.lksb$w_status; if (VMSnok (status)) ErrorExitVmsStatus (status, "sys$enqw()", FI_LI); sys$setprv (0, &SysLckMask, 0, 0); if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "!&Z", islptr->Lksb.lksb$b_valblk); if (islptr->Lksb.lksb$b_valblk[0]) { /* yes it has! lock value block contains a BG: device name string */ InstanceSocketCount++; /* return without a leading underscore */ return (islptr->Lksb.lksb$b_valblk); } /* no BG: device name string, socket will need to be created */ return (NULL); } /*****************************************************************************/ /* Lock the server notification functionality against any concurrent usage. Write the PID of the initiating process into the value block of the CONTROL lock. This can be used for log and audit purposes on other nodes, etc. */ int InstanceLockNotify () { static int LockIndex = INSTANCE_CLUSTER_NOTIFY; static ulong JpiPid = 0; static char PidBuf [8+1]; static VMS_ITEM_LIST3 JpiItems [] = { { 0,0,0,0 } }; int enqfl, status; IO_SB IOsb; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceLockNotify() !UL !&Z", LockIndex, InstanceLockTable[LockIndex].Name); if (InstanceLockTable[LockIndex].InUse) return (SS$_NOTQUEUED); if (!JpiPid) { status = sys$getjpiw (EfnWait, &JpiPid, 0, &JpiItems, &IOsb, 0, 0); if (VMSok (status)) status = IOsb.Status; if (VMSnok (status)) return (status); status = FaoToBuffer (PidBuf, sizeof(PidBuf), NULL, "!8XL", JpiPid); if (VMSnok (status)) return (status); } /* store the PID in the lock status block */ memcpy (&InstanceLockTable[LockIndex].Lksb.lksb$b_valblk, PidBuf, sizeof(PidBuf)); enqfl = LCK$M_VALBLK | LCK$M_CONVERT | LCK$M_SYSTEM; if (SysInfo.LockValueBlockSize == LOCK_VALUE_BLOCK_64) enqfl |= LCK$M_XVALBLK; sys$setprv (1, &SysLckMask, 0, 0); /* convert to EX then to PW causing lock value block to be written */ status = sys$enqw (EfnWait, LCK$K_EXMODE, &InstanceLockTable[LockIndex].Lksb, LCK$M_NOQUEUE | LCK$M_CONVERT | LCK$M_SYSTEM, 0, 0, 0, 0, 0, 2, 0); if (VMSok (status)) status = InstanceLockTable[LockIndex].Lksb.lksb$w_status; if (VMSok (status)) { status = sys$enqw (EfnWait, LCK$K_PWMODE, &InstanceLockTable[LockIndex].Lksb, enqfl, 0, 0, 0, 0, 0, 0, 2, 0); if (VMSok (status)) { status = InstanceLockTable[LockIndex].Lksb.lksb$w_status; if (VMSok (status)) InstanceLockTable[LockIndex].InUse = true; else ErrorNoticed (NULL, status, "sys$enqw", FI_LI); } else { ErrorNoticed (NULL, status, "sys$enqw", FI_LI); status = sys$enqw (EfnWait, LCK$K_NLMODE, &InstanceLockTable[LockIndex].Lksb, LCK$M_CONVERT | LCK$M_SYSTEM, 0, 0, 0, 0, 0, 0, 2, 0); if (VMSok (status)) status = InstanceLockTable[LockIndex].Lksb.lksb$w_status; else ErrorNoticed (NULL, status, "sys$enqw", FI_LI); } } else if (status != SS$_NOTQUEUED) ErrorNoticed (NULL, status, "sys$enqw", FI_LI); sys$setprv (0, &SysLckMask, 0, 0); if (status == SS$_XVALNOTVALID) { /* hmmm, change in cluster composition? whatever! 
go back to 16 bytes */ SysInfo.LockValueBlockSize = LOCK_VALUE_BLOCK_16; ErrorNoticed (NULL, SS$_XVALNOTVALID, ErrorXvalNotValid, FI_LI); } if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "!&S", status); return (status); } /*****************************************************************************/ /* Take out an EX lock on the specified resource. If it cannot be immediately granted then do not queue, immediately return with an indicative status. */ int InstanceLockNoWait (int LockIndex) { int status; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceLockNoWait() !&B !UL !31&H", LockIndex <= INSTANCE_CLUSTER_LOCK_COUNT || InstanceNodeConfig > 1, LockIndex, InstanceLockTable[LockIndex].Name); if (InstanceLockTable[LockIndex].InUse) return (SS$_NOTQUEUED); if (LockIndex > INSTANCE_CLUSTER_LOCK_COUNT && InstanceNodeConfig <= 1) { /* a node-only lock is being requested and not multiple instances */ InstanceLockTable[LockIndex].InUse = true; return (SS$_NORMAL); } sys$setprv (1, &SysLckMask, 0, 0); status = sys$enqw (EfnWait, LCK$K_EXMODE, &InstanceLockTable[LockIndex].Lksb, LCK$M_NOQUEUE | LCK$M_CONVERT | LCK$M_SYSTEM, 0, 0, 0, 0, 0, 0, 2, 0); sys$setprv (0, &SysLckMask, 0, 0); if (VMSok (status)) status = InstanceLockTable[LockIndex].Lksb.lksb$w_status; if (VMSok (status)) InstanceLockTable[LockIndex].InUse = true; return (status); } /*****************************************************************************/ /* Unlock the server control functionality. */ InstanceUnLockNotify () { int status; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceUnLockNotify()"); if (VMSnok (status = InstanceUnLock (INSTANCE_CLUSTER_NOTIFY))) ErrorExitVmsStatus (status, "InstanceUnLockNotify()", FI_LI); } /*****************************************************************************/ /* Take out a EX lock on the specified resource. Wait until it is granted. InstanceLock(), InstanceLockNoWait() and InstanceUnLock() attempt to improve performance by avoiding the use of the DLM where possible. The DLM does not need to be used when its a node-only lock (not for a cluster-wide resource) and when there is only the one instance executing on a node. When this is the case the serialization is performed by AST deliver level and the '.InUse' flags. */ int InstanceLock (int LockIndex) { int status; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceLock() !&B !UL !31&H", LockIndex <= INSTANCE_CLUSTER_LOCK_COUNT || InstanceNodeConfig > 1, LockIndex, InstanceLockTable[LockIndex].Name); if (InstanceLockTable[LockIndex].InUse) return (SS$_BUGCHECK); if (LockIndex > INSTANCE_CLUSTER_LOCK_COUNT && InstanceNodeConfig <= 1) { /* a node-only lock is being requested and not multiple instances */ InstanceLockTable[LockIndex].InUse = true; return (SS$_NORMAL); } sys$setprv (1, &SysLckMask, 0, 0); status = sys$enqw (EfnWait, LCK$K_EXMODE, &InstanceLockTable[LockIndex].Lksb, LCK$M_CONVERT | LCK$M_SYSTEM, 0, 0, 0, 0, 0, 0, 2, 0); sys$setprv (0, &SysLckMask, 0, 0); if (VMSok (status)) status = InstanceLockTable[LockIndex].Lksb.lksb$w_status; if (VMSok (status)) InstanceLockTable[LockIndex].InUse = true; return (status); } /*****************************************************************************/ /* Return the specified lock to NL mode. 
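
A usage sketch pairing the two calls (the lock index shown is illustrative);
the critical section is bracketed by the EX enqueue and the NL conversion:

   if (VMSok (InstanceLock (INSTANCE_NODE_SUPERVISOR)))
   {
      ... manipulate the shared, node- or cluster-visible resource ...
      InstanceUnLock (INSTANCE_NODE_SUPERVISOR);
   }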
*/ int InstanceUnLock (int LockIndex) { int status; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceUnLock() !&B !UL !31&H", LockIndex <= INSTANCE_CLUSTER_LOCK_COUNT || InstanceNodeConfig > 1, LockIndex, InstanceLockTable[LockIndex].Name); if (!InstanceLockTable[LockIndex].InUse) return (SS$_BUGCHECK); if (LockIndex > INSTANCE_CLUSTER_LOCK_COUNT && InstanceNodeConfig <= 1) { /* a node-only lock is being requested and not multiple instances */ InstanceLockTable[LockIndex].InUse = false; return (SS$_NORMAL); } sys$setprv (1, &SysLckMask, 0, 0); status = sys$enqw (EfnWait, LCK$K_NLMODE, &InstanceLockTable[LockIndex].Lksb, LCK$M_CONVERT | LCK$M_SYSTEM, 0, 0, 0, 0, 0, 0, 2, 0); sys$setprv (0, &SysLckMask, 0, 0); if (VMSok (status)) status = InstanceLockTable[LockIndex].Lksb.lksb$w_status; if (VMSok (status)) InstanceLockTable[LockIndex].InUse = false; return (status); } /*****************************************************************************/ /* Set the longword in the shared global section pointed to by the supplied parameter. Lock global section structure if multiple per-node instances possible. This function avoids the overhead of InstanceMutexLock()/UnLock() with it's required set privilege calls, etc., for the very common action of accounting structure longword increment. See InstanceMutexLock() for a description of mutex operation. */ InstanceGblSecSetLong ( long *longptr, long value ) { int TickSecond, WaitCount, WaitHttpdTickSecond; ulong Time64 [2]; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceGblSecSetLong()"); /* if multiple per-node instances not possible */ if (InstanceNodeConfig <= 1) { *longptr = value; return; } if (InstanceMutexHeld[INSTANCE_MUTEX_HTTPD]) ErrorExitVmsStatus (SS$_BUGCHECK, ErrorSanityCheck, FI_LI); WaitCount = 0; InstanceMutexCount[INSTANCE_MUTEX_HTTPD]++; for (;;) { InstanceMutexHeld[INSTANCE_MUTEX_HTTPD] = !_BBSSI (0, &HttpdGblSecPtr->Mutex[INSTANCE_MUTEX_HTTPD]); if (InstanceMutexHeld[INSTANCE_MUTEX_HTTPD]) { *longptr = value; _BBCCI (0, &HttpdGblSecPtr->Mutex[INSTANCE_MUTEX_HTTPD]); InstanceMutexHeld[INSTANCE_MUTEX_HTTPD] = 0; return; } if (!WaitCount++) { InstanceMutexWaitCount[INSTANCE_MUTEX_HTTPD]++; WaitHttpdTickSecond = HttpdTickSecond + INSTANCE_MUTEX_WAIT; } if (SysInfo.AvailCpuCnt == 1) sys$resched (); sys$gettim (&Time64); TickSecond = decc$fix_time (&Time64); if (TickSecond > WaitHttpdTickSecond) break; } /* something's drastically amiss, clear the mutex peremptorily */ _BBCCI (0, &HttpdGblSecPtr->Mutex[INSTANCE_MUTEX_HTTPD]); ErrorExitVmsStatus (SS$_BUGCHECK, ErrorSanityCheck, FI_LI); } /*****************************************************************************/ /* Increment the longword in the shared global section pointed to by the supplied parameter. Lock global section structure if multiple per-node instances possible. This function avoids the overhead of InstanceMutexLock()/UnLock() with it's required set privilege calls, etc., for the very common action of accounting structure longword increment. See InstanceMutexLock() for a description of mutex operation. 
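
The fast path reduces to an interlocked test-and-set of the global section
mutex bit, the update, then a bit-clear; a sketch of just that path (the
time-out handling and single-CPU rescheduling of the actual code are
omitted):

   if (!_BBSSI (0, &HttpdGblSecPtr->Mutex[INSTANCE_MUTEX_HTTPD]))
   {
      (*longptr)++;
      _BBCCI (0, &HttpdGblSecPtr->Mutex[INSTANCE_MUTEX_HTTPD]);
   }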
*/ InstanceGblSecIncrLong (long *longptr) { int TickSecond, WaitCount, WaitHttpdTickSecond; ulong Time64 [2]; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceGblSecIncrLong()"); /* if multiple per-node instances not possible */ if (InstanceNodeConfig <= 1) { *longptr = *longptr + 1; return; } if (InstanceMutexHeld[INSTANCE_MUTEX_HTTPD]) ErrorExitVmsStatus (SS$_BUGCHECK, ErrorSanityCheck, FI_LI); WaitCount = 0; InstanceMutexCount[INSTANCE_MUTEX_HTTPD]++; for (;;) { InstanceMutexHeld[INSTANCE_MUTEX_HTTPD] = !_BBSSI (0, &HttpdGblSecPtr->Mutex[INSTANCE_MUTEX_HTTPD]); if (InstanceMutexHeld[INSTANCE_MUTEX_HTTPD]) { *longptr = *longptr + 1; _BBCCI (0, &HttpdGblSecPtr->Mutex[INSTANCE_MUTEX_HTTPD]); InstanceMutexHeld[INSTANCE_MUTEX_HTTPD] = 0; return; } if (!WaitCount++) { InstanceMutexWaitCount[INSTANCE_MUTEX_HTTPD]++; WaitHttpdTickSecond = HttpdTickSecond + INSTANCE_MUTEX_WAIT; } if (SysInfo.AvailCpuCnt == 1) sys$resched (); sys$gettim (&Time64); TickSecond = decc$fix_time (&Time64); if (TickSecond > WaitHttpdTickSecond) break; } /* something's drastically amiss, clear the mutex peremptorily */ _BBCCI (0, &HttpdGblSecPtr->Mutex[INSTANCE_MUTEX_HTTPD]); ErrorExitVmsStatus (SS$_BUGCHECK, ErrorSanityCheck, FI_LI); } /*****************************************************************************/ /* Same as InstanceGblSecIncrLong() except it decrements the longword if non-zero. See InstanceMutexLock() for a description of mutex operation. */ InstanceGblSecDecrLong (long *longptr) { int TickSecond, WaitCount, WaitHttpdTickSecond; ulong Time64 [2]; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceGblSecDecrLong()"); /* if multiple per-node instances not possible */ if (InstanceNodeConfig <= 1) { if (*longptr) *longptr = *longptr - 1; return; } if (InstanceMutexHeld[INSTANCE_MUTEX_HTTPD]) ErrorExitVmsStatus (SS$_BUGCHECK, ErrorSanityCheck, FI_LI); WaitCount = 0; InstanceMutexCount[INSTANCE_MUTEX_HTTPD]++; for (;;) { InstanceMutexHeld[INSTANCE_MUTEX_HTTPD] = !_BBSSI (0, &HttpdGblSecPtr->Mutex[INSTANCE_MUTEX_HTTPD]); if (InstanceMutexHeld[INSTANCE_MUTEX_HTTPD]) { if (*longptr) *longptr = *longptr - 1; _BBCCI (0, &HttpdGblSecPtr->Mutex[INSTANCE_MUTEX_HTTPD]); InstanceMutexHeld[INSTANCE_MUTEX_HTTPD] = 0; return; } if (!WaitCount++) { InstanceMutexWaitCount[INSTANCE_MUTEX_HTTPD]++; WaitHttpdTickSecond = HttpdTickSecond + INSTANCE_MUTEX_WAIT; } if (SysInfo.AvailCpuCnt == 1) sys$resched (); sys$gettim (&Time64); TickSecond = decc$fix_time (&Time64); if (TickSecond > WaitHttpdTickSecond) break; } /* something's drastically amiss, clear the mutex peremptorily */ _BBCCI (0, &HttpdGblSecPtr->Mutex[INSTANCE_MUTEX_HTTPD]); ErrorExitVmsStatus (SS$_BUGCHECK, ErrorSanityCheck, FI_LI); } /*****************************************************************************/ /* Take out a mutex (lock) on the global section. There is a small chance that an instance will crash or be stopped while the mutex is held. InstanceExit() should reset the mutex if currently held. A sanity checks causes the instance to exit if the mutex is held for more than the defined period. Of course as with all indeterminate shared access there are small critical code sections and chances of race conditions here. Worst-case is a mutex being taken out and not released because of process STOPing, though there should not be infinite loops or waits, the sanity check should cause an exit. 
There is also a small chance that the instance may have released the mutex but still have the flag set that it holds it. This might result in the mutex being "released" (zeroed) while some other instance legitimately holds it. All-in-all such uncoordinated access to the global section might result in minor data corruption (accounting accumulators), but nothing disasterous. On a multi-CPU system this algorithm might cause the waiting instance to "spin" a little (i.e. uselessly consume CPU cycles). It assumes the blocking instance will be scheduled and processing on some other CPU :^) If this "hangs" at AST delivery level then 'HttpdTickSecond' will have stopped ticking. Generate our own ticks here. I'm assuming that this mutex approach is more light-weight than using the DLM. */ InstanceMutexLock (int MutexNumber) { int TickSecond, WaitCount, WaitHttpdTickSecond; ulong Time64 [2]; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceMutexLock() !UL", MutexNumber); if (InstanceNodeConfig <= 1) return; if (MutexNumber <= 0 || MutexNumber > INSTANCE_MUTEX_COUNT || InstanceMutexHeld[MutexNumber]) { char String [256]; sprintf (String, "%s (mutex %d)", ErrorSanityCheck, MutexNumber); ErrorExitVmsStatus (SS$_BUGCHECK, String, FI_LI); } WaitCount = 0; InstanceMutexCount[MutexNumber]++; for (;;) { InstanceMutexHeld[MutexNumber] = !_BBSSI (0, &HttpdGblSecPtr->Mutex[MutexNumber]); if (InstanceMutexHeld[MutexNumber]) return; if (!WaitCount++) { InstanceMutexWaitCount[MutexNumber]++; WaitHttpdTickSecond = HttpdTickSecond + INSTANCE_MUTEX_WAIT; } if (SysInfo.AvailCpuCnt == 1) sys$resched (); sys$gettim (&Time64); TickSecond = decc$fix_time (&Time64); if (TickSecond > WaitHttpdTickSecond) break; } /* something's drastically amiss, clear the mutex peremptorily */ _BBCCI (0, &HttpdGblSecPtr->Mutex[MutexNumber]); ErrorExitVmsStatus (SS$_BUGCHECK, ErrorSanityCheck, FI_LI); } /*****************************************************************************/ /* Reset the mutex taken out on the global section. See InstanceMutexLock() for a description of mutex operation. */ InstanceMutexUnLock (int MutexNumber) { int status; char String [256]; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceMutexUnLock() !UL", MutexNumber); if (InstanceNodeConfig <= 1) return; if (MutexNumber >= 1 && MutexNumber <= INSTANCE_MUTEX_COUNT && InstanceMutexHeld[MutexNumber]) { _BBCCI (0, &HttpdGblSecPtr->Mutex[MutexNumber]); InstanceMutexHeld[MutexNumber] = 0; return; } /* something's drastically amiss, clear the mutex peremptorily */ if (InstanceMutexHeld[MutexNumber]) _BBCCI (0, &HttpdGblSecPtr->Mutex[MutexNumber]); sprintf (String, "%s (mutex %d)", ErrorSanityCheck, MutexNumber); ErrorExitVmsStatus (SS$_BUGCHECK, String, FI_LI); } /*****************************************************************************/ /* This function establishes a DLM based mechanism for registering interest in receiving notifications of "events" across all node and/or cluster instances (depending on the resource name involved) of servers. When called it "registers interest" in the associated resource name and when InstanceNotifyNow() is used the callback AST is activated and the lock status value block used to transfer data to that AST. Enqueues a CR (concurrent read) lock on the specified resource. 
This allows a "blocking" AST to be delivered (back to this function, the two states are differentiated by setting the most significant bit of 'LockIndex' for the AST call), indicating another instance somewhere (using InstanceNotifyNow()) is wishing to initiate a distributed action, by enqueing an EX (exclusive) lock for the same resource. Release the CR lock then immediately enqueue another CR so that the lock value block subsequently written to by the initiating EX mode lock is read via the specified AST function. (Note the 'AstFunction' parameter is only accessed during non-AST processing and so is not a *real* issue - except for purists ;^) The '..' lock status block and AST function are used here because it is being set up to last the life of the server. */ int InstanceNotifySet ( int LockIndex, CALL_BACK AstFunction ) { int enqfl, status; char *cptr; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceNotifySet() !&F !&X !&Z !&A", &InstanceNotifySet, LockIndex, InstanceLockTable[LockIndex&0x7fffffff].Name, LockIndex&0x80000000 ? 0 : AstFunction); sys$setprv (1, &SysLckMask, 0, 0); if (LockIndex & 0x80000000) { enqfl = LCK$M_VALBLK | LCK$M_QUECVT | LCK$M_CONVERT | LCK$M_SYSTEM; if (SysInfo.LockValueBlockSize == LOCK_VALUE_BLOCK_64) enqfl |= LCK$M_XVALBLK; /* mask out the bit that indicates it's an AST */ LockIndex &= 0x7fffffff; if (!InstanceLockTable[LockIndex].InUse) ErrorExitVmsStatus (SS$_BUGCHECK, ErrorSanityCheck, FI_LI); /* convert (wait) current CR mode lock to NL unblocking the queue */ status = sys$enqw (EfnWait, LCK$K_NLMODE, &InstanceLockTable[LockIndex].Lksb, LCK$M_CONVERT | LCK$M_SYSTEM, 0, 0, 0, LockIndex|0x80000000, 0, 0, 2, 0); if (VMSok (status)) status = InstanceLockTable[LockIndex].Lksb.lksb$w_status; if (VMSnok (status)) ErrorExitVmsStatus (status, "sys$enqw()", FI_LI); /* convert (nowait) back to CR to block queue against next EX mode */ status = sys$enq (EfnNoWait, LCK$K_CRMODE, &InstanceLockTable[LockIndex].Lksb, enqfl, 0, 0, &InstanceNotifySetAst, LockIndex|0x80000000, &InstanceNotifySet, 0, 2, 0); if (VMSnok (status)) ErrorExitVmsStatus (status, "sys$enq()", FI_LI); if (status == SS$_XVALNOTVALID) { /* hmmm, change in cluster composition? whatever! back to 16 bytes */ SysInfo.LockValueBlockSize = LOCK_VALUE_BLOCK_16; ErrorNoticed (NULL, SS$_XVALNOTVALID, ErrorXvalNotValid, FI_LI); } } else if (AstFunction) { /* initial call */ if (InstanceLockTable[LockIndex].InUse) ErrorExitVmsStatus (SS$_BUGCHECK, ErrorSanityCheck, FI_LI); InstanceLockTable[LockIndex].InUse = true; InstanceLockTable[LockIndex].AstFunction = AstFunction; /* convert (wait) to CR to block the queue against EX mode */ status = sys$enqw (EfnWait, LCK$K_CRMODE, &InstanceLockTable[LockIndex].Lksb, LCK$M_CONVERT | LCK$M_SYSTEM, 0, 0, 0, LockIndex|0x80000000, &InstanceNotifySet, 0, 2, 0); if (VMSok (status)) status = InstanceLockTable[LockIndex].Lksb.lksb$w_status; if (VMSnok (status)) ErrorExitVmsStatus (status, "sys$enqw()", FI_LI); } else ErrorExitVmsStatus (SS$_BUGCHECK, ErrorSanityCheck, FI_LI); sys$setprv (0, &SysLckMask, 0, 0); return (status); } /*****************************************************************************/ /* This function abstracts away the actual lock status block containing the data being delivered by calling the AST with a pointer to the one in use. 
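
The callback registered via InstanceNotifySet() therefore receives a single
parameter, the address of the lock status block whose value block carries
the delivered data; a sketch (the callback name and use of the data are
illustrative):

   void SomethingNotifyAst (struct lksb *lksbptr)
   {
      ... lksbptr->lksb$b_valblk holds the (null-terminated) notification data ...
   }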
*/ InstanceNotifySetAst (int LockIndex) { /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceNotifySetAst() !&F !&X !&Z", &InstanceNotifySetAst, LockIndex, InstanceLockTable[LockIndex].Name); /* mask out the bit that indicates it's an AST */ LockIndex &= 0x7fffffff; /* invoke the AST function, address of the lock status block parameter */ (*InstanceLockTable[LockIndex].AstFunction) (&InstanceLockTable[LockIndex].Lksb); } /*****************************************************************************/ /* After InstanceNotifySet() has "registered interest" in a particular resource name this function may be used to notify and deliver either, 15 (16) bytes for pre-V8.2 VMS, or 63 (64) bytes for non-VAX V8.2 and later VMS, of data in the lock status block to the callback AST function specified when InstanceNotifySet() was originally called. The 15/63 bytes of data can be anything including a null-terminated string, only the first 15/63 bytes are used of any parameter supplied. As it's generally assumed to be a string the 16/64th byte is always set to a null character (for when the string has been truncated). This function is explicitly called to initiate the notify, queuing an EXMODE lock containing a lock value block, and is also called by itself as an AST to dequeue the EXMODE lock causing the lock value block to be written to all participating in the resource. The two states are differentiated by setting the most significant bit of 'LockIndex' for the AST call. There is a third behaviour performed. If 'LockIndex' is zero the value of the lock ID is returned as a boolean. If zero then the enqueuing has concluded. If non-zero (i.e. a lock ID) then the enqueuing is not complete. Used by polling from ControlCommand(). This function uses it's own internal, static lock status block, and is used infrequently enough that the full enqueue/dequeue does not pose any performance issue. The "queue" then "convert" is required due to the conversion queue (and the locks being converted in InstanceNotifySet()) having priority over the waiting queue (VMS Progamming Concepts diagram). */ int InstanceNotifyNow ( int LockIndex, void *ValuePtr ) { static uchar ValueBlock [LOCK_VALUE_BLOCK_64]; static struct lksb NotifyLksb; int deqfl, status; char *cptr; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceNotifyNow() !&F !&X !&B !&Z !&Z", &InstanceNotifyNow, LockIndex, !QUAD_ZERO (&NotifyLksb.lksb$l_lkid), InstanceLockTable[LockIndex&0x7fffffff].Name, LockIndex&0x80000000 ? 
ValueBlock : ValuePtr); /* just polling the progress of the lock enqueue */ if (!LockIndex) return (!QUAD_ZERO (&NotifyLksb.lksb$l_lkid)); sys$setprv (1, &SysLckMask, 0, 0); if (LockIndex & 0x80000000) { if (SysInfo.LockValueBlockSize == LOCK_VALUE_BLOCK_64) deqfl = LCK$M_XVALBLK; else deqfl = 0; /* dequeue the EX mode lock writing the value block */ status = sys$deq (NotifyLksb.lksb$l_lkid, ValueBlock, 0, deqfl); if (VMSnok (status)) ErrorExitVmsStatus (status, "sys$deq()", FI_LI); memset (&NotifyLksb, 0, sizeof(struct lksb)); memset (ValueBlock, 0, sizeof(ValueBlock)); InstanceUnLockNotify (); } else if (QUAD_ZERO (&NotifyLksb.lksb$l_lkid)) { if (ValuePtr) memcpy (ValueBlock, ValuePtr, SysInfo.LockValueBlockSize); /* queue (wait) a CR mode lock */ status = sys$enqw (EfnWait, LCK$K_CRMODE, &NotifyLksb, LCK$M_SYSTEM, &InstanceLockTable[LockIndex].NameDsc, 0, 0, LockIndex|0x80000000, 0, 0, 2, 0); if (VMSok (status)) { status = NotifyLksb.lksb$w_status; if (VMSok (status)) { /* convert (nowait) to EX mode */ status = sys$enq (EfnNoWait, LCK$K_EXMODE, &NotifyLksb, LCK$M_CONVERT | LCK$M_SYSTEM, 0, 0, &InstanceNotifyNow, LockIndex|0x80000000, 0, 0, 2, 0); if (VMSnok (status)) ErrorExitVmsStatus (status, "sys$enq()", FI_LI); } else ErrorNoticed (NULL, status, "sys$enqw()", FI_LI); } } else ErrorNoticed (NULL, status = SS$_BUGCHECK, ErrorSanityCheck, FI_LI); sys$setprv (0, &SysLckMask, 0, 0); return (status); } /*****************************************************************************/ /* Waiting up to |Seconds| gain the notification lock and then send the notification. After notification wait |Seconds| for it to complete. Where an AST is not in progress (/DO=..) the notification is waited on to completion or timeout. Where an AST is in progress (Server Admin) there is not wait to completion. Note that InstanceNotifyNow() actually releases the lock gained in this function as the value block is delivered. Returns a VMS status. */ int InstanceNotifyWait ( int LockIndex, void *ValuePtr, int Seconds ) { static ulong JpiAstAct; static VMS_ITEM_LIST3 JpiItems [] = { { sizeof(JpiAstAct), JPI$_ASTACT, &JpiAstAct, 0 }, { 0,0,0,0 } }; int cnt, status; IO_SB IOsb; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceNotifyWait()"); /* wait for the lock */ if (!Seconds) status = InstanceLockNotify (); else for (cnt = Seconds * 10; cnt; cnt--) { if (VMSok (status = InstanceLockNotify ())) break; if (status != SS$_NOTQUEUED) break; usleep (100 * 1000); /* 100 mS */ } /* if the lock was not available */ if (VMSnok (status)) return (status); status = InstanceNotifyNow (LockIndex, ValuePtr); if (VMSok (status)) { status = sys$getjpiw (EfnWait, 0, 0, &JpiItems, &IOsb, 0, 0); if (VMSok (status)) status = IOsb.Status; if (VMSok (status)) { if (!(JpiAstAct & 0x08)) { /* user mode AST not active so wait for completion */ for (cnt = Seconds * 10; cnt; cnt--) { if (!InstanceNotifyNow (0, NULL)) break; usleep (100 * 1000); /* 100 mS */ } if (!cnt) status = SS$_TIMEOUT; } } } else InstanceUnLockNotify (); if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "!&S", status); return (status); } /*****************************************************************************/ /* NL locks indicate any other utility, etc., (e.g. HTTPDMON) that may have an interest in the server locks. Non-NL locks indicate active server interest in the resource. 
This function gets all locks associated with the specified lock resource and then goes through them noting each non-NL lock. From these it can set the count of the number of servers (number of CR locks) and/or create a list of the processes with these non-NL locks. It returns a pointer to a dynamically allocated string containing list of processes. THIS MUST BE FREED. */ int InstanceLockList ( int LockIndex, char *Separator, char **ListPtrPtr ) { static ushort JpiNodeNameLen, JpiPrcNamLen; static char JpiNodeName [7], JpiPrcNam [16]; static struct { ushort tot_len, /* bits 0..15 */ lck_len; /* bits 16..30 */ } LkiLocksLength; static VMS_ITEM_LIST3 JpiItems [] = { { sizeof(JpiNodeName)-1, JPI$_NODENAME, &JpiNodeName, &JpiNodeNameLen }, { sizeof(JpiPrcNam)-1, JPI$_PRCNAM, &JpiPrcNam, &JpiPrcNamLen }, { 0,0,0,0 } }; static VMS_ITEM_LIST3 LkiItems [] = { /* careful, values are dynamically assigned in code below! */ { 0, LKI$_LOCKS, 0, &LkiLocksLength }, {0,0,0,0} }; int cnt, status, ListBytes, LockCount, NonNlLockCount; char *aptr, *sptr; IO_SB IOsb; LKIDEF *lkiptr; LKIDEF LkiLocks [INSTANCE_REPORT_LOCK_MAX]; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) { WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceLockList()"); WatchDataDump (InstanceLockTable[LockIndex].Name, InstanceLockTable[LockIndex].NameLength); } NonNlLockCount = 0; if (ListPtrPtr) *ListPtrPtr = NULL; LkiItems[0].buf_addr = LkiLocks; LkiItems[0].buf_len = sizeof(LkiLocks); sys$setprv (1, &SysLckMask, 0, 0); status = sys$getlkiw (EfnWait, &InstanceLockTable[LockIndex].Lksb.lksb$l_lkid, &LkiItems, &IOsb, 0, 0, 0); sys$setprv (0, &SysLckMask, 0, 0); if (VMSok (status)) status = IOsb.Status; if (VMSnok (status)) { ErrorNoticed (NULL, status, NULL, FI_LI); return (-1); } if (LkiLocksLength.tot_len) { if (LkiLocksLength.tot_len & 0x8000) { ErrorNoticed (NULL, SS$_BADPARAM, NULL, FI_LI); return (NULL); } LockCount = LkiLocksLength.tot_len / LkiLocksLength.lck_len; } else LockCount = 0; cnt = LockCount; for (lkiptr = &LkiLocks; cnt--; lkiptr++) { /* only interested in CR locks when not looking at supervisor */ if (LockIndex != INSTANCE_NODE_SUPERVISOR) if (lkiptr->lki$b_grmode == LCK$K_NLMODE) continue; NonNlLockCount++; } /* if not interested in generating a list */ if (!(Separator && ListPtrPtr && NonNlLockCount)) { if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "!UL", NonNlLockCount); return (NonNlLockCount); } ListBytes = sizeof(JpiNodeName) + sizeof(JpiPrcNam) + strlen(Separator); ListBytes *= LockCount; aptr = sptr = VmGet (ListBytes); /* use WORLD to allow access to other processes */ sys$setprv (1, &WorldMask, 0, 0); cnt = LockCount; for (lkiptr = &LkiLocks; cnt--; lkiptr++) { /* only interested in CR locks when not looking at supervisor */ if (LockIndex != INSTANCE_NODE_SUPERVISOR) if (lkiptr->lki$b_grmode == LCK$K_NLMODE) continue; status = sys$getjpiw (EfnWait, &lkiptr->lki$l_pid, 0, &JpiItems, &IOsb, 0, 0); if (VMSok (status)) status = IOsb.Status; if (VMSnok (status)) { ErrorNoticed (NULL, status, NULL, FI_LI); continue; } JpiNodeName[JpiNodeNameLen] = '\0'; JpiPrcNam[JpiPrcNamLen] = '\0'; if (aptr[0]) strcpy (sptr, Separator); while (*sptr) sptr++; strcpy (sptr, JpiNodeName); while (*sptr) sptr++; strcpy (sptr, "::"); while (*sptr) sptr++; strcpy (sptr, JpiPrcNam); while (*sptr) sptr++; } sys$setprv (0, &WorldMask, 0, 0); *ListPtrPtr = aptr; if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchDataFormatted ("!UL !&Z\n", NonNlLockCount, aptr); return (NonNlLockCount); } 
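/*
   A caller sketch for the function above (lock index and separator are
   illustrative); the returned list, when requested, is allocated with
   VmGet() and must be freed by the caller:

      char  *ListPtr;
      int  cnt;
      cnt = InstanceLockList (INSTANCE_NODE_SUPERVISOR, ", ", &ListPtr);
      if (cnt > 0 && ListPtr)
      {
         ... use the "node::process, node::process ..." string ...
         ... then release the VmGet()ed list ...
      }
*/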
/*****************************************************************************/
/*
Using the lock IDs in the general and socket lock tables, produce a report
listing all of the related locks, showing process PIDs, cluster nodes, etc.
This is mainly intended as a debugging, development and trouble-shooting
tool.
*/

InstanceLockReport (REQUEST_STRUCT *rqptr)

{
   static char  BeginPage [] =

/* (the page-heading HTML markup originally in this string literal has been
   lost from this listing; only the report column titles survive) */
"\
!#*   MSTLKID  MSTCSID  RQ GR QU LKID     \
CSID     PRCNAM          PID      VALBLK(!UL)\n";
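   /* in BeginPage the leading !#* directive pads the resource name column to
      InstanceLockReportNameWidth and the !UL shows the lock value block size,
      both supplied via AdminPageTitle() below */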

   static char  MutexFao [] = "\n!18AZ  !11&L / !&L (!UL%)";
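   /* MutexFao produces one line per global-section mutex: its description,
      total acquisition count, wait (contended) count, and the waits as a
      percentage of acquisitions */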

   static char  EndPage [] =
/* (the page-trailer HTML markup originally in this string literal has been
   lost from this listing; the remaining !AZ directive is filled with
   AdminRefresh() below) */
"\
\n\ !AZ\ \n\ \n\ \n"; int idx, status, ResNameLength; ulong *vecptr; ulong FaoVector [32]; /*********/ /* begin */ /*********/ if (WATCHMOD (rqptr, WATCH_MOD_INSTANCE)) WatchThis (WATCHITM(rqptr), WATCH_MOD_INSTANCE, "InstanceLockReport()"); InstanceLockReportNameWidth = 0; /* the socket locks index from zero!! */ for (idx = 0; idx < InstanceSocketCount; idx++) { ResNameLength = strlen(InstanceParseLockName(InstanceSocketTable[idx].Name)); if (ResNameLength > InstanceLockReportNameWidth) InstanceLockReportNameWidth = ResNameLength; } AdminPageTitle (rqptr, "Lock Report", BeginPage, InstanceLockReportNameWidth, SysInfo.LockValueBlockSize); /* use WORLD to allow access to other process' PID process names */ sys$setprv (1, &WorldMask, 0, 0); sys$setprv (1, &SysLckMask, 0, 0); /* the general locks index from one!! */ for (idx = 1; idx <= INSTANCE_LOCK_COUNT; idx++) InstanceLockReportData (rqptr, &InstanceLockTable[idx].Lksb.lksb$l_lkid); /* the socket locks index from zero!! */ for (idx = 0; idx < InstanceSocketCount; idx++) InstanceLockReportData (rqptr, &InstanceSocketTable[idx].Lksb.lksb$l_lkid); if (InstanceLockAdmin.Lksb.lksb$l_lkid) InstanceLockReportData (rqptr, &InstanceLockAdmin. Lksb.lksb$l_lkid); sys$setprv (0, &SysLckMask, 0, 0); sys$setprv (0, &WorldMask, 0, 0); InstanceMutexLock (INSTANCE_MUTEX_HTTPD); for (idx = 1; idx <= INSTANCE_MUTEX_COUNT; idx++) { vecptr = FaoVector; *vecptr++ = InstanceMutexDescr[idx]; *vecptr++ = HttpdGblSecPtr->MutexCount[idx]; *vecptr++ = HttpdGblSecPtr->MutexWaitCount[idx]; *vecptr++ = PercentOf (HttpdGblSecPtr->MutexWaitCount[idx], HttpdGblSecPtr->MutexCount[idx]); status = FaolToNet (rqptr, MutexFao, &FaoVector); if (VMSnok (status)) ErrorNoticed (rqptr, status, NULL, FI_LI); } InstanceMutexUnLock (INSTANCE_MUTEX_HTTPD); vecptr = FaoVector; *vecptr++ = AdminRefresh(); status = FaolToNet (rqptr, EndPage, &FaoVector); if (VMSnok (status)) ErrorNoticed (rqptr, status, NULL, FI_LI); rqptr->rqResponse.PreExpired = PRE_EXPIRE_ADMIN; ResponseHeader200 (rqptr, "text/html", &rqptr->NetWriteBufferDsc); AdminEnd (rqptr); } /*****************************************************************************/ /* Report on a single lock. */ InstanceLockReportData ( REQUEST_STRUCT *rqptr, ulong *LockIdPtr ) { static char LockDataFao [] = "!#AZ !8XL !8AZ !AZ !AZ !&@ !8XL !8AZ \ !15AZ !8XL !AZ\n"; static char *LockMode [] = { "NL","CR","CW","PR","PW","EX" }, /* lock state seems to range from -8 (RSPRESEND) to +1 (GR) */ *LockState [] = { "??","??","-8","-7","-6","-5","-4", "-3","-2","WT","CV","GR","??","??" }; static ulong JpiPrcNamLen, LkiResNamLen, LkiValBlkLen, Lki_XVALNOTVALID; static char NodeName [16], JpiPrcNam [16], JpiUserName [13], LkiResNam [31+1], LkiValBlk [LOCK_VALUE_BLOCK_64+1]; static struct { ushort tot_len, /* bits 0..15 */ lck_len; /* bits 16..30 */ } *lksptr, LkiLocksLen; static VMS_ITEM_LIST3 LkiItems [] = { /* careful, values are dynamically assigned in code below! 
*/ { 0, 0, 0, 0 }, /* reserved for LKI$_LOCKS item */ { 0, 0, 0, 0 }, /* reserved for LKI$_RESNAM item */ { 0, 0, 0, 0 }, /* reserved for LKI$_[X]VALBLK item */ { 0, 0, 0, 0 }, /* reserved for LKI$_XVALNOTVALID item */ {0,0,0,0} }; static VMS_ITEM_LIST3 JpiItems [] = { { sizeof(JpiPrcNam)-1, JPI$_PRCNAM, &JpiPrcNam, &JpiPrcNamLen }, { sizeof(JpiUserName), JPI$_USERNAME, &JpiUserName, 0 }, { 0,0,0,0 } }; static VMS_ITEM_LIST3 SyiItems [] = { { sizeof(NodeName)-1, SYI$_NODENAME, &NodeName, 0 }, {0,0,0,0} }; int cnt, idx, status, LockCount, LockTotal; ulong *vecptr; ulong FaoVector [32]; char *cptr; char CsidNodeName [16], MstCsidNodeName [16], PidPrcNam [16], String [256]; IO_SB IOsb; LKIDEF *lkiptr; LKIDEF LkiLocks [INSTANCE_REPORT_LOCK_MAX]; /*********/ /* begin */ /*********/ if (WATCHMOD (rqptr, WATCH_MOD_INSTANCE)) WatchThis (WATCHITM(rqptr), WATCH_MOD_INSTANCE, "InstanceLockReportData() !8XL", *LockIdPtr); memset (LkiValBlk, 0, sizeof(LkiValBlk)); LkiItems[0].buf_len = sizeof(LkiLocks); LkiItems[0].buf_addr = &LkiLocks; LkiItems[0].item = LKI$_LOCKS; LkiItems[0].ret_len = &LkiLocksLen; LkiItems[1].buf_len = sizeof(LkiResNam); LkiItems[1].buf_addr = &LkiResNam; LkiItems[1].item = LKI$_RESNAM; LkiItems[1].ret_len = &LkiResNamLen; if (SysInfo.LockValueBlockSize == LOCK_VALUE_BLOCK_64) { LkiItems[2].buf_len = LOCK_VALUE_BLOCK_64; LkiItems[2].buf_addr = &LkiValBlk; LkiItems[2].item = LKI$_XVALBLK; LkiItems[2].ret_len = &LkiValBlkLen; LkiItems[3].buf_len = sizeof(Lki_XVALNOTVALID); LkiItems[3].buf_addr = &Lki_XVALNOTVALID; LkiItems[3].item = LKI$_XVALNOTVALID; } else { LkiItems[2].buf_len = LOCK_VALUE_BLOCK_16; LkiItems[2].buf_addr = &LkiValBlk; LkiItems[2].item = LKI$_VALBLK; LkiItems[2].ret_len = &LkiValBlkLen; /* in this case this terminates the item list */ LkiItems[3].buf_len = 0; LkiItems[3].buf_addr = 0; LkiItems[3].item = 0; Lki_XVALNOTVALID = 0; } status = sys$getlkiw (EfnWait, LockIdPtr, &LkiItems, &IOsb, 0, 0, 0); if (VMSok (status)) status = IOsb.Status; if (VMSnok (status)) { ErrorNoticed (rqptr, status, NULL, FI_LI); return (status); } if (Lki_XVALNOTVALID) { /* hmmm, change in cluster composition? whatever! 
go back to 16 bytes */ SysInfo.LockValueBlockSize = LOCK_VALUE_BLOCK_16; ErrorNoticed (NULL, SS$_XVALNOTVALID, ErrorXvalNotValid, FI_LI); return (SS$_XVALNOTVALID); } LkiResNam[LkiResNamLen] = '\0'; LkiValBlk[LkiValBlkLen] = '\0'; lkiptr = LkiItems[0].buf_addr; lksptr = LkiItems[0].ret_len; if (lksptr->tot_len & 0x8000) { ErrorNoticed (rqptr, SS$_BADPARAM, NULL, FI_LI); return (SS$_BADPARAM); } if (lksptr->lck_len) LockCount = lksptr->tot_len / lksptr->lck_len; else LockCount = 0; if (!LockCount) { ErrorNoticed (NULL, status, ErrorSanityCheck, FI_LI); return (status); } for (cnt = 0; cnt < LockCount; cnt++, lkiptr++) { memset (NodeName, 0, sizeof(NodeName)); status = sys$getsyiw (EfnWait, &lkiptr->lki$l_mstcsid, 0, &SyiItems, &IOsb, 0, 0); if (VMSok (status)) status = IOsb.Status; if (VMSnok (status)) { ErrorNoticed (rqptr, status, NULL, FI_LI); continue; } strcpy (MstCsidNodeName, NodeName); memset (NodeName, 0, sizeof(NodeName)); status = sys$getsyiw (EfnWait, &lkiptr->lki$l_csid, 0, &SyiItems, &IOsb, 0, 0); if (VMSok (status)) status = IOsb.Status; if (VMSnok (status)) { ErrorNoticed (rqptr, status, NULL, FI_LI); continue; } strcpy (CsidNodeName, NodeName); memset (JpiPrcNam, 0, sizeof(JpiPrcNam)); status = sys$getjpiw (EfnWait, &lkiptr->lki$l_pid, 0, &JpiItems, &IOsb, 0, 0); if (VMSok (status)) status = IOsb.Status; if (VMSnok (status)) { ErrorNoticed (rqptr, status, NULL, FI_LI); continue; } JpiPrcNam[15] = JpiUserName[12] = '\0'; for (cptr = JpiUserName; *cptr && *cptr != ' '; cptr++); *cptr = '\0'; if (strsame (LkiValBlk, CONTROL_AUTH_SKELKEY, sizeof(CONTROL_AUTH_SKELKEY)-1) && !isdigit(LkiValBlk[sizeof(CONTROL_AUTH_SKELKEY)-1])) { /* mask any credentials */ LkiValBlk[sizeof(CONTROL_AUTH_SKELKEY)-1] = '*'; LkiValBlk[sizeof(CONTROL_AUTH_SKELKEY)] = '\0'; } vecptr = FaoVector; *vecptr++ = InstanceLockReportNameWidth; if (cnt) *vecptr++ = ""; else *vecptr++ = InstanceParseLockName(LkiResNam); *vecptr++ = lkiptr->lki$l_mstlkid; *vecptr++ = MstCsidNodeName; *vecptr++ = LockMode[lkiptr->lki$b_rqmode]; *vecptr++ = LockMode[lkiptr->lki$b_grmode]; if (lkiptr->lki$b_queue == 1) *vecptr++ = "!AZ"; else *vecptr++ = "!AZ"; *vecptr++ = LockState[lkiptr->lki$b_queue+10]; *vecptr++ = lkiptr->lki$l_lkid; *vecptr++ = CsidNodeName; *vecptr++ = JpiPrcNam; *vecptr++ = ADMIN_REPORT_SHOW_PROCESS; *vecptr++ = lkiptr->lki$l_pid; *vecptr++ = JpiUserName; *vecptr++ = lkiptr->lki$l_pid; *vecptr++ = LkiValBlk; status = FaolToNet (rqptr, LockDataFao, &FaoVector); if (VMSnok (status) || status == SS$_BUFFEROVF) ErrorNoticed (rqptr, status, NULL, FI_LI); } return (status); } /*****************************************************************************/ /* Parse the binary lock resource name into readable format suitable for display. Return a pointer to a static buffer containing that description. 
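
As an example of the output format (all values illustrative), a per-node
function lock parses to something like "WASD|1|0|DELTA|<lock-use>" and a
per-node IPv4 socket lock to something like "WASD|1|0|DELTA|203.0.113.1,80",
i.e. the server name, lock version, environment number, node name, then
either the lock-use name or the address,port pair.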
*/ char* InstanceParseLockName (char *LockName) { static char String [128]; static char *LockUses [] = { INSTANCE_LOCK_USES }; static char ErrorOverflow [] = "***OVERFLOW***"; int cnt; uint Ip4Address; ushort IpPort; uchar Ip6Address [16]; char *cptr, *sptr, *zptr; /*********/ /* begin */ /*********/ if (WATCH_MODULE(WATCH_MOD_INSTANCE)) WatchThis (WATCHALL, WATCH_MOD_INSTANCE, "InstanceParseLockName()"); zptr = (sptr = String) + sizeof(String)-16; cptr = LockName; cnt = sizeof(HTTPD_NAME)-1; while (cnt-- && sptr < zptr) *sptr++ = *cptr++; /* version and environment number */ sptr += sprintf (sptr, "|%d|%d", (*cptr & 0xf0) >> 4, *cptr & 0x0f); if (sptr >= zptr) return (ErrorOverflow); cptr++; if (*cptr > INSTANCE_LOCK_PRINTABLE) { /* node name */ if (sptr < zptr) *sptr++ = '|'; while (*cptr > INSTANCE_LOCK_PRINTABLE && sptr < zptr) *sptr++ = *cptr++; } if (*cptr <= INSTANCE_LOCK_COUNT) { sptr += sprintf (sptr, "|%s", LockUses[*cptr]); if (sptr >= zptr) return (ErrorOverflow); *sptr = '\0'; return (String); } if (*cptr == INSTANCE_NODE_SOCKIP4) { cptr++; Ip4Address = *(UINTPTR)cptr; cptr += sizeof(uint); IpPort = *(USHORTPTR)cptr; sptr += sprintf (sptr, "|%s,%d", TcpIpAddressToString(Ip4Address,4), IpPort); } else if (*cptr == INSTANCE_NODE_SOCKIP6) { cptr++; memcpy (&Ip6Address, cptr, sizeof(Ip6Address)); cptr += sizeof(Ip6Address); IpPort = *(USHORTPTR)cptr; sptr += sprintf (sptr, "|%s,%d", TcpIpAddressToString(&Ip6Address,6), IpPort); } else *sptr++ = '?'; if (sptr >= zptr) return (ErrorOverflow); *sptr = '\0'; return (String); } /*****************************************************************************/