Example #1
/*{
** Name: LGK_initialize()	-  initialize the lg/lk shared mem segment.
**
** Description:
**	This routine is called by the LGinitialize or LKinitialize routine.  It
**	assumes that a previous caller has allocated the shared memory segment.
**
**	If it discovers that the shared memory segment has not yet been
**	initialized, it calls the LG and LK initialize-memory routines to do so.
**
** Inputs:
**	flag		- bit mask of:
**			  LOCK_LGK_MEMORY to lock the shared data segment
**			  LGK_IS_CSP if the process is the CSP process on this node.
**
** Outputs:
**	sys_err		- place for system-specific error information.
**
**	Returns:
**	    OK	- success
**	    !OK - failure (CS*() routine failure, segment not mapped, ...)
**	
**  History:
**	Summer, 1992 (bryanp)
**	    Working on the new portable logging and locking system.
**	19-oct-1992 (bryanp)
**	    Check memory version number when attaching.
**	22-oct-1992 (bryanp)
**	    Change LGLKDATA.MEM to lglkdata.mem.
**	23-Oct-1992 (daveb)
**	    name the semaphore too.
**	13-feb-1993 (keving)
**	    Remove support for II_LGK_MEMORY_SIZE. If II_LG_MEMSIZE
**	    is not set then calculate memory size from PM values. 
**	24-may-1993 (bryanp)
**	    If the shared memory is the wrong version, don't install the
**	    at_exit handlers (the rundown routines won't be able to interpret
**	    the memory properly).
**	26-jul-1993 (jnash)
**	    Add 'flag' param to lock the LGK data segment.
**	20-sep-1993 (bryanp)
**	    In addition to calling PCatexit, call (on VMS) sys$dclexh, since
**		there are some situations (image death and image rundown without
**		process rundown) which are caught neither by PCatexit (since
**		PCexit isn't run) nor by check-dead threads (since process
**		rundown never happened). This fixes a hole where an access-
**		violating ckpdb or auditdb command never got cleaned up.
**	31-jan-1994 (bryanp)
**	    Back out a few "features" which are proving counterproductive:
**	    1) Don't bother checking mem_creator_pid to see if the previous
**		creator of the shared memory has died. This was an attempt to
**		gracefully re-use sticky shared memory following a system crash,
**		but it is suspected as being the culprit in a series of system
**		failures by re-initializing the shared memory at inopportune
**		times.
**	    2) Don't complain if the shared memory already exists but is of a
**		different size than you expected. Just go ahead and try to use
**		it anyway.
**	21-feb-1994 (bryanp)
**	    Reverse item (1) of the above 31-jan-1994 change and re-enable the
**		graceful re-use of shared memory. People weren't happy with
**		having to run ipcclean and csinstall all the time.
**	23-may-1994 (bryanp)
**	    On VMS, disable ^Y for LG/LK-aware processes. We don't want to allow
**		^Y because you might interrupt the process right in the middle
**		of an LG or LK operation, while holding the shared memory
**		semaphore, and this would then wedge the whole installation.
**          
**      17-May-1994 (daveb) 59127
**          Attach lgk_mem semaphore if we're attaching to the segment.
**      30-jan-1995 (lawst01) bug 61984
**          Use memory needed calculation from the 'lgk_calculate_size'
**          function to determine the size of the shared memory pool for
**          logging and locking. If the II_LG_MEMSIZE variable is specified
**          with a value larger than needed use the supplied value. If
**          lgk_calculate_size is unable to calculate a size then use the
**          magic number of 400000.  In addition issue a warning message
**          and continue executing in the event the number of pages
**          allocated is less than the number requested. 
**	24-apr-1997 (nanpr01)
**	    Reinstate Bryanp's change. The fix for bug 61984 by Steve
**	    Lawrence, and the subsequent undo of Steve's fix by Nick
**	    Ireland on 25-jun-96 (nick), caused the "if 0" code to be
**	    removed. Part of Steve's change was not reinstated, such as
**	    not returning the status, and exiting vs. continuing.
**	    1. Don't complain if the shared memory already exists but is of a
**	    different size than you expected. Just go ahead and try to use
**	    it.
**     18-aug-1998 (hweho01)
**          Reclaim the kernel resource if LG/LK shared memory segment is  
**          reinitialized. If the shared segment is re-used (the previous creator 
**          of the shared segment has died), the cross-process semaphores get 
**          initialized more than once at the same locations. That causes
**          kernel resource leaks on DG/UX (OS release 4.11MU04). To fix the 
**          problem, CS_cp_sem_cleanup() is called to destroy all the 
**          semaphores before the LG/LK shared segment gets recreated.
**          CS_cp_sem_cleanup() is made dependent on xCL_NEED_SEM_CLEANUP and
**          OS_THREADS_USED, it returns immediately for most platforms.  
**	27-Mar-2000 (jenjo02)
**	    Added test for crossed thread types, refuse connection
**	    to LGK memory with E_DMA811_LGK_MT_MISMATCH.
**	18-apr-2001 (devjo01)
**	    s103715 (Portable cluster support)
**	    - Add CX mem requirement calculations.
**	    - Add LGK_IS_CSP flag to indicate that LGK memory is being
**	      initialized for a CSP process.
**	    - Add basic CX initialization.
**      19-sep-2002 (devjo01)
**          If running NUMA clustered allocate memory out of local RAD.
**	30-Apr-2003 (jenjo02)
**	    Rearchitected to silence long-tolerated race conditions.
**	    BUG 110121.
**	27-feb-2004 (devjo01)
**	    Rework allocation of CX shared memory to be compatible
**	    with race condition fix introduced for bug 110121.
**	29-Dec-2008 (jonj)
**	    If lgk_calculate_size() returns FAIL, the total memory
**	    needed exceeds MAX_SIZE_TYPE and we can't continue, but
**	    tell what we can about the needs of the various bits of
**	    memory before quitting.
**	06-Aug-2009 (wanfr01)
**	    Bug 122418 - Return E_DMA812 if LOCK_LGK_MUST_ATTACH is
**	    passed in and the memory segment does not exist.
**      20-Nov-2009 (maspa05) bug 122642
**          In order to synchronize creation of UUIDs across servers, added
**          a semaphore and a 'last time' variable to LGK memory.
**      14-Dec-2009 (maspa05) bug 122642
**          #ifdef out the above change for Windows. The rest of the change
**          does not apply to Windows so the variables aren't defined.
*/
STATUS
LGK_initialize(
i4	  	flag,
CL_ERR_DESC	*sys_err,
char		*lgk_info)
{
    PTR		ptr;
    SIZE_TYPE	memleft;
    SIZE_TYPE	size;
    STATUS	ret_val;
    STATUS	mem_exists;
    char	mem_name[15];
    SIZE_TYPE	allocated_pages;
    i4		me_flags;
    i4		me_locked_flag;
    SIZE_TYPE	memory_needed;
    char	*nm_string;
    SIZE_TYPE	pages;
    LGK_MEM	*lgk_mem;
    i4		err_code;
    SIZE_TYPE   min_memory;
    i4		retries;
    i4		i;
    i4		attached;
    PID		*my_pid_slot;
    i4		clustered;
    u_i4	nodes;
    SIZE_TYPE	cxmemreq;
    PTR		pcxmem;
    LGLK_INFO	lgkcount;
    char	instid[4];

    CL_CLEAR_ERR(sys_err);

    /*
    ** if LGK_base is set then this routine has already been called.  It is
    ** set up so that both LGinitialize and LKinitialize call it, but only
    ** the first call does anything.
    */

    if (LGK_base.lgk_mem_ptr)
	return(OK);

    PCpid( &LGK_my_pid );

    memory_needed = 0;
    NMgtAt("II_LG_MEMSIZE", &nm_string);
    if (nm_string && *nm_string)
#if defined(LP64)
	if (CVal8(nm_string, (long*)&memory_needed))
#else
	if (CVal(nm_string, (i4 *)&memory_needed))
#endif /* LP64 */
	    memory_needed = 0;

    /* Always calculate memory needed from PM resource settings  */
    /* and compare with supplied value, if supplied value is less */
    /* than minimum then use minimum                             */

    min_memory = 0;
    if ( OK == lgk_get_counts(&lgkcount, FALSE))
    {
	if ( lgk_calculate_size(FALSE, &lgkcount, &min_memory) )
	{
	    /*
	    ** Memory exceeds MAX_SIZE_TYPE, can't continue.
	    ** 
	    ** Do calculation again, this time with "wordy"
	    ** so user can see allocation bits, then quit.
	    */
	    lgk_calculate_size(TRUE, &lgkcount, &min_memory);
	    return (E_DMA802_LGKINIT_ERROR); 
	}
    }
    if (min_memory)
       memory_needed = (memory_needed < min_memory) ? min_memory
                                                    : memory_needed;
    else
       memory_needed = (memory_needed < 400000 ) ? 400000 
                                                 : memory_needed;

    clustered = (i4)CXcluster_enabled();
    cxmemreq = 0;
    if ( clustered )
    {

	if ( OK != CXcluster_nodes( &nodes, NULL ) )
	    nodes = 0;
	cxmemreq = CXshm_required( 0, nodes, lgkcount.lgk_max_xacts,
		    lgkcount.lgk_max_locks, lgkcount.lgk_max_resources );
	if ( MAX_SIZE_TYPE - memory_needed < cxmemreq )
	{
	    /*
	    ** Memory exceeds MAX_SIZE_TYPE, can't continue.
	    ** 
	    ** Do calculation again, this time with "wordy"
	    ** so user can see allocation bits, then quit.
	    */
	    SIprintf("Total LG/LK/CX allocation exceeds max of %lu bytes by %lu\n"
	    	     "Adjust logging/locking configuration values and try again\n",
		         MAX_SIZE_TYPE, cxmemreq - (MAX_SIZE_TYPE - memory_needed));
	    lgk_calculate_size(TRUE, &lgkcount, &min_memory);
	    return (E_DMA802_LGKINIT_ERROR); 
	}
	memory_needed += cxmemreq;
    }

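    /*
    ** Convert the byte request to whole pages, rounding up; the guard
    ** avoids SIZE_TYPE overflow when memory_needed lies within one page
    ** of MAX_SIZE_TYPE.
    */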
    if ( memory_needed < MAX_SIZE_TYPE - ME_MPAGESIZE )
	pages = (memory_needed + ME_MPAGESIZE - 1) / ME_MPAGESIZE;
    else
        pages = memory_needed / ME_MPAGESIZE;

    /*
    ** Lock the LGK segment if requested to do so
    */
    if (flag & LOCK_LGK_MEMORY)
	me_locked_flag = ME_LOCKED_MASK;
    else
	me_locked_flag = 0;

    me_flags = (me_locked_flag | ME_MSHARED_MASK | ME_IO_MASK | 
		ME_CREATE_MASK | ME_NOTPERM_MASK | ME_MZERO_MASK);
    if (CXnuma_user_rad())
        me_flags |= ME_LOCAL_RAD;

    STcopy("lglkdata.mem", mem_name);

    /*
    ** In general, we just want to attach to the shared memory and detect if
    ** we are the first process to do so. However, there are ugly race
    ** conditions to consider, as well as complications because the shared
    ** memory may be left around following a system crash.
    **
    ** First we attempt to create the shared memory. Usually it already exists,
    ** so we check for and handle the case of "already exists".
    */

    /*
    ** (jenjo02)
    **
    ** Restructured to better handle all those ugly race conditions
    ** which are easily reproduced by running two scripts, one that
    ** continuously executes "lockstat" while the other is starting
    ** and stopping Ingres.
    **
    ** For example,
    **
    **		lockstat A	acquires and init's the memory
    **		RCP		attaches to "A" memory
    **		lockstat A	terminates normally
    **		lockstat B	attaches to "A" memory, sees that
    **				"A"s pid is no longer alive, and
    **				reinitializes the memory, much to
    **				the RCP's chagrin.
    ** or (more commonly)
    **
    **		lockstat A	acquires and begins to init the mem
    **		RCP		attaches to "A" memory which is
    **				still being zero-filled by lockstat,
    **				checks the version number (zero),
    **				and fails with a E_DMA434 mismatch.
    **
    ** The fix utilizes the mem_ext_sem to synchronize multiple
    ** processes; if the semaphore hasn't been initialized or
    ** if mem_version_no is zero, we'll wait one second and retry,
    ** up to 60 seconds before giving up. This gives the creating
    ** process time to complete initialization of the memory.
    **
    ** Up to LGK_MAX_PIDS are allowed to attach to the shared
    ** memory. When a process attaches it sets its PID in the
    ** first vacant slot in lgk_mem->mem_pid[]; if there are
    ** no vacant slots, the attach is refused. When the process
    ** terminates normally by calling LGK_rundown(), it zeroes
    ** its PID slot.
    **
    ** When attaching to an existing segment, we check if  
    ** there are any live processes still using the memory;
    ** if so, we can't destroy it (no matter who created it).
    ** If there are no live processes attached to the memory,
    ** we destroy and reallocate it (based on current config.dat
    ** settings).
    */

    for ( retries = 0; ;retries++ )
    {
	LGK_base.lgk_mem_ptr = (PTR)NULL;
	
	/* Give up if unable to get memory in one minute */
#if defined(conf_CLUSTER_BUILD)
        if (retries > 1)
#else
	if ( retries )
#endif
	{
	    if ( retries < 60 )
		PCsleep(1000);
	    else
	    {
		/* Another process has it blocked way too long */
		uleFormat(NULL, E_DMA800_LGKINIT_GETMEM, (CL_ERR_DESC *)NULL,
				ULE_LOG, NULL, NULL, 0, NULL, &err_code, 0);
		/* Unable to attach allocated shared memory segment. */
		return (E_DMA802_LGKINIT_ERROR); 
	    }
	}

	ret_val = MEget_pages(me_flags,
				pages, mem_name, (PTR*)&lgk_mem,
				&allocated_pages, sys_err);

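	/* Assignment intended: remember whether the segment already existed */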
	if ( mem_exists = ret_val )
	{
	    if (ret_val == ME_ALREADY_EXISTS)
	    {
		ret_val = MEget_pages((me_locked_flag | 
				       ME_MSHARED_MASK | ME_IO_MASK),
				      pages, mem_name, (PTR*)&lgk_mem,
				      &allocated_pages, sys_err);
#if defined(conf_CLUSTER_BUILD)
                if (ret_val && !retries)
                    continue;  /* try one more time */
#endif
	    }
	    if (ret_val)
	    {
		uleFormat(NULL, ret_val, sys_err,
				ULE_LOG, NULL, NULL, 0, NULL, &err_code, 0);
		uleFormat(NULL, E_DMA800_LGKINIT_GETMEM, (CL_ERR_DESC *)NULL,
				ULE_LOG, NULL, NULL, 0, NULL, &err_code, 0);
		/* Unable to attach allocated shared memory segment. */
		return (E_DMA802_LGKINIT_ERROR); 
	    }
	}
	else if (flag & LOCK_LGK_MUST_ATTACH)
	{	
	    /* Do not use the shared segment you just allocated */
	    MEfree_pages((PTR)lgk_mem, allocated_pages, sys_err);
	    return (E_DMA812_LGK_NO_SEGMENT); 
	}

	size = allocated_pages * ME_MPAGESIZE;

	/* Expose this process to the memory */
	LGK_base.lgk_mem_ptr = (PTR)lgk_mem;

	if ( mem_exists )
	{
	    /*
	    ** Memory exists.
	    **
	    ** Try to acquire the semaphore. If it's
	    ** uninitialized, retry from the top.
	    **
	    ** If the version is zero, then another
	    ** process is initializing the memory;
	    ** keep retrying until the version is 
	    ** filled in.
	    **
	    */
	    if ( ret_val = CSp_semaphore(1, &lgk_mem->mem_ext_sem) )
	    {
		if ( ret_val != E_CS000A_NO_SEMAPHORE )
		{
		    uleFormat(NULL, ret_val, sys_err,
				ULE_LOG, NULL, NULL, 0, NULL, &err_code, 0);
		    ret_val = E_DMA802_LGKINIT_ERROR;
		    break;
		}
		continue;
	    }

	    /* Retry if still being init'd by another process */
	    if ( !lgk_mem->mem_version_no )
	    {
		CSv_semaphore(&lgk_mem->mem_ext_sem);
		continue;
	    }

	    /*
	    ** Check pids which appear to be attached to
	    ** the memory:
	    **
	    ** If any process is still alive, then we
	    ** assume the memory is consistent and use it.
	    **
	    ** If a process is now dead, it terminated
	    ** without going through LGK_rundown
	    ** to zero its PID slot, zero it now.
	    **
	    ** If there are no live PIDs attached to 
	    ** the memory, we destroy and recreate it.
	    */
	    my_pid_slot = (PID*)NULL;
	    attached = 0;

	    for ( i = 0; i < LGK_MAX_PIDS; i++ )
	    {
		if ( lgk_mem->mem_pid[i] && 
		     PCis_alive(lgk_mem->mem_pid[i]) )
		{
		    attached++;
		}
		else
		{
		    /* Vacate the slot */
		    if (lgk_mem->mem_pid[i])
		    {
			uleFormat(NULL, E_DMA499_DEAD_PROCESS_INFO, (CL_ERR_DESC *)NULL,
				ULE_LOG, NULL, NULL, 0, NULL, &err_code, 2,
				0, lgk_mem->mem_pid[i],
				0, lgk_mem->mem_info[i].info_txt);
		    }
		    lgk_mem->mem_pid[i] = (PID)0;
		    lgk_mem->mem_info[i].info_txt[0] = EOS;

		    /* Use first vacant slot for this process */
		    if ( !my_pid_slot )
		    {
			my_pid_slot = &lgk_mem->mem_pid[i];
			LGK_base.lgk_pid_slot = i;
		    }
		}
		/* Quit when both questions answered */
		if ( attached && my_pid_slot )
		    break;
	    }

	    /* If no living pids attached, destroy/reallocate */
	    if ( !attached )
	    {
		CSv_semaphore(&lgk_mem->mem_ext_sem);
		if ( LGK_destroy(allocated_pages, sys_err) )
		{
		    ret_val = E_DMA802_LGKINIT_ERROR;
		    break;
		}
		continue;
	    }

	    /* All attached pids alive? */
	    if ( !my_pid_slot )
	    {
		/* ... then there's no room for this process */
		uleFormat(NULL, E_DMA80A_LGK_ATTACH_LIMIT, (CL_ERR_DESC *)NULL,
		    ULE_LOG, NULL, NULL, 0, NULL, &err_code, 1,
		    0, attached);
	        ret_val = E_DMA802_LGKINIT_ERROR;
	    }
	    else if (lgk_mem->mem_version_no != LGK_MEM_VERSION_CURRENT)
	    {
		uleFormat(NULL, E_DMA434_LGK_VERSION_MISMATCH, (CL_ERR_DESC *)NULL,
		    ULE_LOG, NULL, NULL, 0, NULL, &err_code, 2,
		    0, lgk_mem->mem_version_no, 0, LGK_MEM_VERSION_CURRENT);
		ret_val = E_DMA435_WRONG_LGKMEM_VERSION;
	    }
	    /*
	    ** Don't allow mixed connections of MT/non-MT processes.
	    ** Among other things, the mutexing mechanisms are 
	    ** incompatible!
	    */
	    else if ( (CS_is_mt() && (lgk_mem->mem_status & LGK_IS_MT) == 0) ||
		     (!CS_is_mt() &&  lgk_mem->mem_status & LGK_IS_MT) )
	    {
		uleFormat(NULL, E_DMA811_LGK_MT_MISMATCH, (CL_ERR_DESC *)NULL,
		    ULE_LOG, NULL, NULL, 0, NULL, &err_code, 2,
		    0, (lgk_mem->mem_status & LGK_IS_MT) ? "OS"
							 : "INTERNAL",
		    0, (CS_is_mt()) ? "OS"
				    : "INTERNAL");
		ret_val = E_DMA802_LGKINIT_ERROR;
	    }
	    else
	    {
		/*
		** CX memory (if any) will lie immediately past LGK header.
		*/
		pcxmem = (PTR)(lgk_mem + 1);
		pcxmem = (PTR)ME_ALIGN_MACRO(pcxmem, sizeof(ALIGN_RESTRICT));

		LGK_base.lgk_lkd_ptr = (char *)LGK_base.lgk_mem_ptr +
					lgk_mem->mem_lkd;
		LGK_base.lgk_lgd_ptr = (char *)LGK_base.lgk_mem_ptr +
					lgk_mem->mem_lgd;
		
		/* Stuff our pid in first vacant slot */
		*my_pid_slot = LGK_my_pid;
		STlcopy(lgk_info, lgk_mem->mem_info[LGK_base.lgk_pid_slot].info_txt,
			LGK_INFO_SIZE-1);
	    }

#if defined(VMS) || defined(UNIX)
	    /* set up pointers to reference the uuid mutex and last time
	     * variable */

	    if (!ID_uuid_sem_ptr)
           	ID_uuid_sem_ptr=&lgk_mem->id_uuid_sem;

	    if (!ID_uuid_last_time_ptr)
                ID_uuid_last_time_ptr=&lgk_mem->uuid_last_time;

	    if (!ID_uuid_last_cnt_ptr)
                ID_uuid_last_cnt_ptr=&lgk_mem->uuid_last_cnt;
#endif

	    CSv_semaphore(&lgk_mem->mem_ext_sem);
	}
	else
	{

	    /* Memory did not exist */
	    /* Zero the version to keep other processes out */
	    lgk_mem->mem_version_no = 0;

#if defined(VMS) || defined(UNIX)
	    /* set up the uuid mutex and last time pointers to
	     * reference the objects in shared memory */

	    {
	        STATUS id_stat;

	        ID_uuid_sem_ptr=&lgk_mem->id_uuid_sem;
                ID_uuid_last_time_ptr=&lgk_mem->uuid_last_time;
                ID_uuid_last_cnt_ptr=&lgk_mem->uuid_last_cnt;
	        *ID_uuid_last_cnt_ptr=0;
	        ID_UUID_SEM_INIT(ID_uuid_sem_ptr,CS_SEM_MULTI,"uuid sem",
				&id_stat);
	    }
#endif

	    /* ... then initialize the mutex */
	    CSw_semaphore(&lgk_mem->mem_ext_sem, CS_SEM_MULTI,
	    			    "LGK mem ext sem" );

	    /* Record if memory created for MT or not */
	    if ( CS_is_mt() )
		lgk_mem->mem_status = LGK_IS_MT;

	    /*
	    ** memory is as follows:
	    **
	    **	------------------------------------------------------------
	    **	| LGK_MEM struct (keep track of this mem)		   |
	    **	|							   |
	    **	------------------------------------------------------------
	    **	| If a clustered installation, memory reserved for CX	   |
	    **	|							   |
	    **	------------------------------------------------------------
	    **	| LKD - database of info for lk system			   |
	    **	|							   |
	    **	------------------------------------------------------------
	    **	| LGD - database of info for lg system			   |
	    **	|							   |
	    **	------------------------------------------------------------
	    **	| memory manipulated by LGKm_* routines for structures used |
	    **	| by both the lk and lg systems.			   |
	    **	|							   |
	    **	------------------------------------------------------------
	    */

	    /* put the LGK_MEM struct at the head of the segment, leaving
	    ** pcxmem pointing at the next aligned piece of memory
	    */

	    /*
	    ** CX memory (if any) will lie immediately past LGK header.
	    */
	    pcxmem = (PTR)(lgk_mem + 1);
	    pcxmem = (PTR)ME_ALIGN_MACRO(pcxmem, sizeof(ALIGN_RESTRICT));

	    LGK_base.lgk_lkd_ptr = pcxmem + cxmemreq;
	    LGK_base.lgk_lkd_ptr = (PTR) ME_ALIGN_MACRO(LGK_base.lgk_lkd_ptr,
						sizeof(ALIGN_RESTRICT));
	    lgk_mem->mem_lkd = (i4)((char *)LGK_base.lgk_lkd_ptr -
					 (char *)LGK_base.lgk_mem_ptr);

	    LGK_base.lgk_lgd_ptr = (PTR) ((char *) LGK_base.lgk_lkd_ptr +
					    sizeof(LKD));
	    LGK_base.lgk_lgd_ptr = (PTR) ME_ALIGN_MACRO(LGK_base.lgk_lgd_ptr,
						sizeof(ALIGN_RESTRICT));
	    lgk_mem->mem_lgd = (i4)((char *)LGK_base.lgk_lgd_ptr -
					 (char *)LGK_base.lgk_mem_ptr);

	    /* now initialize the rest of memory for allocation */

	    /* how much memory is left? */

	    ptr = ((char *)LGK_base.lgk_lgd_ptr + sizeof(LGD));
	    memleft = size - (((char *) ptr) - ((char *) LGK_base.lgk_mem_ptr));

	    if ( (ret_val = lgkm_initialize_mem(memleft, ptr)) == OK &&
		 (ret_val = LG_meminit(sys_err)) == OK &&
		 (ret_val = LK_meminit(sys_err)) == OK )
	    {
		/* Clear array of attached pids and pid info */
		for ( i = 0; i < LGK_MAX_PIDS; i++ )
		{
		    lgk_mem->mem_pid[i] = (PID)0;
		    lgk_mem->mem_info[i].info_txt[0] = EOS;
		}

		/* Set the creator pid */
		LGK_base.lgk_pid_slot = 0;
		lgk_mem->mem_creator_pid = LGK_my_pid;

		/* Set the version, releasing other processes */
		lgk_mem->mem_version_no = LGK_MEM_VERSION_CURRENT;
	    }
	    else
	    {
		uleFormat(NULL, ret_val, (CL_ERR_DESC *)NULL,
			    ULE_LOG, NULL, NULL, 0, NULL, &err_code, 0);
		ret_val = E_DMA802_LGKINIT_ERROR;

		/* Destroy the shared memory */
		LGK_destroy(allocated_pages, sys_err);
	    }
	}

	if ( ret_val == OK )
	{
	    PCatexit(LGK_rundown);

	    if ( clustered )
	    {
		/*
		** Perform preliminary cluster connection and CX memory init.
		*/

		/* Get installation code */
		NMgtAt("II_INSTALLATION", &nm_string);
		if ( nm_string )
		{
		    instid[0] = *(nm_string);
		    instid[1] = *(nm_string+1);
		}
		else
		{
		    instid[0] = 'A';
		    instid[1] = 'A';
		}
		instid[2] = '\0';
		ret_val = CXinitialize( instid, pcxmem, flag & LGK_IS_CSP );
		if ( ret_val )
		{
		    /* Report error returned from CX */
		    uleFormat(NULL, ret_val, (CL_ERR_DESC *)NULL,
			ULE_LOG, NULL, NULL, 0, NULL, &err_code, 0 );
		    break;
		}
	    }

#ifdef VMS
	    {
	    static $EXHDEF	    exit_block;
	    i4			ctrl_y_mask = 0x02000000;

	    /*
	    ** On VMS, programs like the dmfjsp and logstat run as images in
	    ** the shell process. That is, the system doesn't start and stop
	    ** a process for each invocation of the program, it just starts
	    ** and stops an image in the same process. This means that if
	    ** the program should die, the image may be rundown but the process
	    ** will remain, which means that the check-dead threads of other
	    ** processes in the installation will not feel that they need to
	    ** rundown this process, since it's still alive.
	    **
	    ** By declaring an exit handler, which will get a chance to run
	    ** even if PCexit isn't called, we improve our chances of getting
	    ** to perform rundown processing if we should die unexpectedly.
	    **
	    ** Furthermore, we ask DCL to disable its ^Y processing, which
	    ** lessens the chance that the user will interrupt us while we
	    ** are holding the semaphore.
	    */
	    exit_block.exh$g_func = LGK_rundown;
	    exit_block.exh$l_argcount = 1;
	    exit_block.exh$gl_value = &exit_block.exh$l_status;

	    if (sys$dclexh(&exit_block) != SS$_NORMAL)
		ret_val = FAIL;

	    lib$disable_ctrl(&ctrl_y_mask, 0);
	    }
#endif
	}
	break;
    }

    if ( ret_val )
	LGK_base.lgk_mem_ptr = NULL;

    return(ret_val);
}
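
A minimal calling sketch (an illustration, not code from the original source):
the description above says LGinitialize() and LKinitialize() both funnel into
LGK_initialize(), so a hypothetical wrapper could look like the following. The
zero flag value and the info string are assumptions made for illustration only.

/* Hypothetical caller sketch: attach to (or create) the LG/LK shared
** memory segment with no special flags; lgk_info identifies this
** process in its mem_info[] slot. Calling it more than once per
** process is safe, since LGK_initialize() returns OK if already attached.
*/
static STATUS
LG_attach_sketch(CL_ERR_DESC *sys_err)
{
    return LGK_initialize(0, sys_err, "sketch: example attacher");
}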
Example #2
/*
**++
**  ROUTINE:	sp_open
**
**  FUNCTIONAL DESCRIPTION:
**
**  	Spawns a subprocess, possibly passing it an initial command.
**
**  RETURNS:	cond_value, longword (unsigned), write only, by value
**
**  PROTOTYPE:
**
**  	sp_open(SPHANDLE *ctxpp, struct dsc$descriptor *inicmd,
**  	    	    unsigned int (*rcvast)(void *), void *rcvastprm);
**
**  IMPLICIT INPUTS:	None.
**
**  IMPLICIT OUTPUTS:	None.
**
**  COMPLETION CODES:
**  	    SS$_NORMAL:	    Normal successful completion.
**
**  SIDE EFFECTS:   	None.
**
**--
*/
unsigned int sp_open (SPHANDLE *ctxpp, void *inicmd, unsigned int (*rcvast)(void *), void *rcvastprm) {

    SPHANDLE ctx;
    unsigned int dvi_devnam = DVI$_DEVNAM, dvi_devbufsiz = DVI$_DEVBUFSIZ;
    unsigned int spawn_flags = CLI$M_NOWAIT|CLI$M_NOKEYPAD;
    unsigned int status;
    struct dsc$descriptor inbox, outbox;

    status = lib$get_vm(&spb_size, &ctx);
    if (!OK(status)) return status;

/*
** Assign the SPHANDLE address for the caller immediately, to avoid timing
** issues with a WRTATTN AST that passes the ctx as rcvastprm (as sp_once does).
*/
    *ctxpp = ctx;
    ctx->sendque.head = ctx->sendque.tail = &ctx->sendque;
    ctx->ok_to_send = 0;

/*
** Create the mailboxes we'll be using for I/O with the subprocess
*/
    status = sys$crembx(0, &ctx->inchn, 1024, 1024, 0xff00, 0, 0, 0);
    if (!OK(status)) {
    	lib$free_vm(&spb_size, &ctx);
    	return status;
    }
    status = sys$crembx(0, &ctx->outchn, 1024, 1024, 0xff00, 0, 0, 0);
    if (!OK(status)) {
    	sys$dassgn(ctx->inchn);
    	lib$free_vm(&spb_size, &ctx);
    	return status;
    }

/*
** Now that they're created, let's find out what they're called so we
** can tell LIB$SPAWN
*/
    INIT_DYNDESC(inbox);
    INIT_DYNDESC(outbox);
    lib$getdvi(&dvi_devnam, &ctx->inchn, 0, 0, &inbox);
    lib$getdvi(&dvi_devnam, &ctx->outchn, 0, 0, &outbox);
    lib$getdvi(&dvi_devbufsiz, &ctx->outchn, 0, &ctx->bufsiz);

/*
** Create the output buffer for the subprocess.
*/
    status = lib$get_vm(&ctx->bufsiz, &ctx->bufptr);
    if (!OK(status)) {
    	sys$dassgn(ctx->outchn);
    	sys$dassgn(ctx->inchn);
    	str$free1_dx(&inbox);
    	str$free1_dx(&outbox);
    	lib$free_vm(&spb_size, &ctx);
    	return status;
    }

/*
** Set the "receive AST" routine to be invoked by SP_WRTATTN_AST
*/
    ctx->rcvast = rcvast;
    ctx->astprm = rcvastprm;
    sys$qiow(0, ctx->outchn, IO$_SETMODE|IO$M_WRTATTN, 0, 0, 0,
    	sp_wrtattn_ast, ctx, 0, 0, 0, 0);
    sys$qiow(0, ctx->inchn, IO$_SETMODE|IO$M_READATTN, 0, 0, 0,
    	sp_readattn_ast, ctx, 0, 0, 0, 0);

/*
** Get us a termination event flag
*/
    status = lib$get_ef(&ctx->termefn);
    if (OK(status)) status = lib$get_ef(&ctx->inefn);
    if (OK(status)) status = lib$get_ef(&ctx->outefn);
    if (!OK(status)) {
    	sys$dassgn(ctx->outchn);
    	sys$dassgn(ctx->inchn);
    	str$free1_dx(&inbox);
    	str$free1_dx(&outbox);
    	lib$free_vm(&ctx->bufsiz, &ctx->bufptr);
    	lib$free_vm(&spb_size, &ctx);
    	return status;
    }

/*
** Now create the subprocess
*/
    status = lib$spawn(inicmd, &inbox, &outbox, &spawn_flags, 0, &ctx->pid,
    	    0, &ctx->termefn);
    if (!OK(status)) {
    	lib$free_ef(&ctx->termefn);
    	lib$free_ef(&ctx->outefn);
    	lib$free_ef(&ctx->inefn);
    	sys$dassgn(ctx->outchn);
    	sys$dassgn(ctx->inchn);
    	str$free1_dx(&inbox);
    	str$free1_dx(&outbox);
    	lib$free_vm(&ctx->bufsiz, &ctx->bufptr);
    	lib$free_vm(&spb_size, &ctx);
    	return status;
    }

/*
** Set up the exit handler, if we haven't done so already
*/
    status = sys$setast(0);
    if (!exh_declared) {
    	sys$dclexh(&exhblk);
    	exh_declared = 1;
    }
    if (status == SS$_WASSET) sys$setast(1);

/*
** Save the SPB in our private queue
*/
    queue_insert(ctx, spque.tail);

/*
** Clean up and return
*/
    str$free1_dx(&inbox);
    str$free1_dx(&outbox);

    return SS$_NORMAL;

} /* sp_open */
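
A hedged usage sketch (assumptions: SPHANDLE, sp_open(), and the OK() success
test come from this module's header, and descrip.h provides $DESCRIPTOR; none
of those declarations appear in the excerpt above):

/* Spawn a subprocess running an initial DCL command; output-available
** notifications arrive via an AST-level callback.
*/
static SPHANDLE spctx;

static unsigned int example_rcvast (void *prm)
{
    SPHANDLE ctx = *(SPHANDLE *) prm;	/* rcvastprm was &spctx below */
    /* AST level: the subprocess wrote to its output mailbox */
    return SS$_NORMAL;
}

static unsigned int example_spawn (void)
{
    static $DESCRIPTOR(inicmd, "SHOW TIME");
    unsigned int status;

    /* sp_open assigns *ctxpp before any AST can fire, so the callback
    ** can safely dereference &spctx (see the comment in sp_open above) */
    status = sp_open(&spctx, &inicmd, example_rcvast, &spctx);
    return status;
}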
gtcm_server()
{
	static readonly int4	reptim[2] = {-100000, -1};	/* 10ms */
       	static readonly int4	wait[2] =  {-1000000, -1};	/* 100ms */
	void		gtcm_ch(), gtcm_exi_handler(), gtcm_init_ast(), gtcm_int_unpack(), gtcm_mbxread_ast(),
			gtcm_neterr(), gtcm_read_ast(), gtcm_remove_from_action_queue(), gtcm_shutdown_ast(), gtcm_write_ast(),
			la_freedb();
	bool		gtcm_link_accept();
	bool		alid;
	char		buff[512];
	char		*h = NULL;
	char		*la_getdb();
	char		nbuff[256];
	char		*pak = NULL;
	char		reply;
	unsigned short	outlen;
	int4		closewait[2] = {0, -1};
	int4		inid = 0, mdl = 0, nid = 0, days = 0;
	int4		lic_status;
	int4		lic_x;
	int4		lm_mdl_nid();
	uint4		status;
	int		i, receive(), value;
	mstr		name1, name2;
	struct NTD	*cmu_ntdroot();
	connection_struct *prev_curr_entry;
	struct	dsc$descriptor_s	dprd;
	struct	dsc$descriptor_s	dver;
	$DESCRIPTOR(node_name, nbuff);
	$DESCRIPTOR(proc_name, "GTCM_SERVER");
	$DESCRIPTOR(timout, buff);
	DCL_THREADGBL_ACCESS;

	GTM_THREADGBL_INIT;
        assert(0 == EMPTY_QUEUE);       /* check so we don't need gdsfhead everywhere */
	common_startup_init(GTCM_GNP_SERVER_IMAGE); /* Side-effect: Sets skip_dbtriggers to TRUE for non-trigger platforms */
	gtm_env_init();	/* read in all environment variables */
	name1.addr = "GTCMSVRNAM";
	name1.len = SIZEOF("GTCMSVRNAM") - 1;
	status = trans_log_name(&name1, &name2, nbuff);
	if (SS$_NORMAL == status)
	{
		proc_name.dsc$a_pointer = nbuff;
		proc_name.dsc$w_length = node_name.dsc$w_length = name2.len;
	} else if (SS$_NOLOGNAM == status)
	{
		MEMCPY_LIT(nbuff, "GTCMSVR");
		node_name.dsc$w_length = SIZEOF("GTCMSVR") - 1;
	} else
		rts_error_csa(CSA_ARG(NULL) VARLSTCNT(1) status);
	sys$setprn(&proc_name);
	status = lib$get_foreign(&timout, 0, &outlen, 0);
	if ((status & 1) && (6 > outlen))
	{
		for (i = 0, value = 0;  i < outlen;  i++)
		{
			value = value * 10;
			if (buff[i] <= '9' && buff[i] >= '0')
				value += buff[i] - 48;
			else
				break;
		}
		if (outlen && (i == outlen))
		{
			cm_timeout = TRUE;
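			/* VMS delta time: value * -10000000 is 'value' seconds in 100ns units */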
			closewait[0] = value * -10000000;
		}
	}
	dprd.dsc$w_length = cm_prd_len;
	dprd.dsc$b_dtype  = DSC$K_DTYPE_T;
	dprd.dsc$b_class  = DSC$K_CLASS_S;
	dprd.dsc$a_pointer= cm_prd_name;
	dver.dsc$w_length = cm_ver_len;
	dver.dsc$b_dtype  = DSC$K_DTYPE_T;
	dver.dsc$b_class  = DSC$K_CLASS_S;
	dver.dsc$a_pointer= cm_ver_name;
	ast_init();
	licensed = TRUE;
	lkid = 2;
#	ifdef NOLICENSE
	lid = 1;
#	else
	/* this code used to be scattered to discourage reverse engineering, but since it is now disabled, that seems pointless */
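	/* each step in this chain runs only if the previous status is odd (VMS success) */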
	lic_status = ((NULL == (h = la_getdb(LMDB))) ? LP_NOCNFDB : SS$_NORMAL);
	lic_status = ((1 == (lic_status & 1)) ? lm_mdl_nid(&mdl, &nid, &inid) : lic_status);
	lic_status = ((1 == (lic_status & 1)) ? lp_licensed(h, &dprd, &dver, mdl, nid, &lid, &lic_x, &days, pak) : lic_status);
	if (LP_NOCNFDB != lic_status)
		la_freedb(h);
	if (1 == (lic_status & 1))
	{
		licensed = TRUE;
		if (days < 14)
			rts_error_csa(CSA_ARG(NULL) VARLSTCNT(1) ERR_WILLEXPIRE);
	} else
	{
		licensed = FALSE;
		sys$exit(lic_status);
	}
#	endif
	gtcm_ast_avail = astq_dyn_avail - GTCM_AST_OVRHD;
	stp_init(STP_INITSIZE);
	rts_stringpool = stringpool;
	cache_init();
	procnum = 0;
	get_proc_info(0, TADR(login_time), &image_count);
        memset(proc_to_clb, 0, SIZEOF(proc_to_clb));
	status = cmi_init(&node_name, 0, 0, gtcm_init_ast, gtcm_link_accept);
	if (!(status & 1))
	{
		rts_error_csa(CSA_ARG(NULL) VARLSTCNT(1) ((status ^ 3) | 4));
		sys$exit(status);
	}
	ntd_root = cmu_ntdroot();
	ntd_root->mbx_ast =  gtcm_mbxread_ast;
	ntd_root->err = gtcm_neterr;
	gtcm_connection = FALSE;
	lib$establish(gtcm_ch);
	gtcm_exi_blk.exit_hand = &gtcm_exi_handler;
	gtcm_exi_blk.arg_cnt = 1;
	gtcm_exi_blk.cond_val = &gtcm_exi_condition;
	sys$dclexh(&gtcm_exi_blk);
	INVOKE_INIT_SECSHR_ADDRS;
	initialize_pattern_table();
	assert(run_time); /* Should have been set by common_startup_init */
	while (!cm_shutdown)
	{
		if (blkdlist)
			gtcml_chkreg();

		assert(!lib$ast_in_prog());
		status = sys$dclast(&gtcm_remove_from_action_queue, 0, 0);
		if (SS$_NORMAL != status)
			rts_error_csa(CSA_ARG(NULL) VARLSTCNT(4) CMERR_CMSYSSRV, 0, status, 0);
		if (INTERLOCK_FAIL == curr_entry)
			rts_error_csa(CSA_ARG(NULL) VARLSTCNT(1) CMERR_CMINTQUE);
		if (EMPTY_QUEUE != curr_entry)
		{
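			/* Dispatch on the message-type code at the head of the client's buffer */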
			switch (*curr_entry->clb_ptr->mbf)
			{
				case CMMS_L_LKCANALL:
					reply = gtcmtr_lkcanall();
					break;
				case CMMS_L_LKCANCEL:
					reply = gtcmtr_lkcancel();
					break;
				case CMMS_L_LKREQIMMED:
					reply = gtcmtr_lkreqimmed();
					break;
				case CMMS_L_LKREQNODE:
					reply = gtcmtr_lkreqnode();
					break;
				case CMMS_L_LKREQUEST:
					reply = gtcmtr_lkrequest();
					break;
				case CMMS_L_LKRESUME:
					reply = gtcmtr_lkresume();
					break;
				case CMMS_L_LKACQUIRE:
					reply = gtcmtr_lkacquire();
					break;
				case CMMS_L_LKSUSPEND:
					reply = gtcmtr_lksuspend();
					break;
				case CMMS_L_LKDELETE:
					reply = gtcmtr_lkdelete();
					break;
				case CMMS_Q_DATA:
					reply = gtcmtr_data();
					break;
				case CMMS_Q_GET:
					reply = gtcmtr_get();
					break;
				case CMMS_Q_KILL:
					reply = gtcmtr_kill();
					break;
				case CMMS_Q_ORDER:
					reply = gtcmtr_order();
					break;
				case CMMS_Q_PREV:
					reply = gtcmtr_zprevious();
					break;
				case CMMS_Q_PUT:
					reply = gtcmtr_put();
					break;
				case CMMS_Q_QUERY:
					reply = gtcmtr_query();
					break;
				case CMMS_Q_ZWITHDRAW:
					reply = gtcmtr_zwithdraw();
					break;
				case CMMS_S_INITPROC:
					reply = gtcmtr_initproc();
					break;
				case CMMS_S_INITREG:
					reply = gtcmtr_initreg();
					break;
				case CMMS_S_TERMINATE:
					reply = gtcmtr_terminate(TRUE);
					break;
				case CMMS_E_TERMINATE:
					reply = gtcmtr_terminate(FALSE);
					break;
				case CMMS_U_LKEDELETE:
					reply = gtcmtr_lke_clearrep(curr_entry->clb_ptr, curr_entry->clb_ptr->mbf);
					break;
				case CMMS_U_LKESHOW:
					reply = gtcmtr_lke_showrep(curr_entry->clb_ptr, curr_entry->clb_ptr->mbf);
					break;
				case CMMS_B_BUFRESIZE:
					reply = CM_WRITE;
					value = *(unsigned short *)(curr_entry->clb_ptr->mbf + 1);
					if (value > curr_entry->clb_ptr->mbl)
					{
						free(curr_entry->clb_ptr->mbf);
						curr_entry->clb_ptr->mbf = malloc(value);
					}
					*curr_entry->clb_ptr->mbf = CMMS_C_BUFRESIZE;
					curr_entry->clb_ptr->mbl = value;
					curr_entry->clb_ptr->cbl = 1;
					break;
				case CMMS_B_BUFFLUSH:
					reply = gtcmtr_bufflush();
					break;
				case CMMS_Q_INCREMENT:
					reply = gtcmtr_increment();
					break;
				default:
					reply = FALSE;
					if (SS$_NORMAL == status)
                                                rts_error_csa(CSA_ARG(NULL)
							VARLSTCNT(3) ERR_BADGTMNETMSG, 1, (int)*curr_entry->clb_ptr->mbf);
					break;
			}
			if (curr_entry)		/* curr_entry can be NULL if went through gtcmtr_terminate */
			{
				status = sys$gettim(&curr_entry->lastact[0]);
				if (SS$_NORMAL != status)
					rts_error_csa(CSA_ARG(NULL) VARLSTCNT(1) status);
				/* curr_entry is used by gtcm_mbxread_ast to determine if it needs to defer the interrupt message */
				prev_curr_entry = curr_entry;
				if (CM_WRITE == reply)
				{	/* if ast == gtcm_write_ast, let it worry */
					curr_entry->clb_ptr->ast = gtcm_write_ast;
					curr_entry = EMPTY_QUEUE;
					cmi_write(prev_curr_entry->clb_ptr);
				} else
				{
					curr_entry = EMPTY_QUEUE;
					if (1 == (prev_curr_entry->int_cancel.laflag & 1))
					{  /* valid interrupt cancel msg, handle in gtcm_mbxread_ast */
						status = sys$dclast(gtcm_int_unpack, prev_curr_entry, 0);
						if (SS$_NORMAL != status)
							rts_error_csa(CSA_ARG(NULL) VARLSTCNT(1) status);
					} else  if (CM_READ == reply)
					{
						prev_curr_entry->clb_ptr->ast = gtcm_read_ast;
						cmi_read(prev_curr_entry->clb_ptr);
					}
				}
			}
		} else  if (1 < astq_dyn_avail)
		{
#			ifdef GTCM_REPTIM
			/* if reptim is not needed - and smw doesn't know why it would be - remove this	*/
			status = sys$schdwk(0, 0, &wait[0], &reptim[0]);
#			else
			status = sys$schdwk(0, 0, &wait[0], 0);
#			endif
			sys$hiber();
			sys$canwak(0, 0);
		}
		if (cm_timeout && (0 == gtcm_users))
                        sys$setimr(efn_ignore, closewait, gtcm_shutdown_ast, &cm_shutdown, 0);
	}
}