GSTATUS
DDFSemCreate(
    PSEMAPHORE  sem,
    char        *name)
{
    GSTATUS err = GSTAT_OK;
    STATUS  status;

    G_ASSERT(!sem, 1);

    status = CSw_semaphore(&(sem->semaphore), CS_SEM_SINGLE, name);
    if (status == OK)
        status = CScnd_init(&sem->cond);
    if (status == OK)
        status = CScnd_name(&sem->cond, "DDF R/W semaphore");
    sem->have = 0;

    if (status != OK)
        err = DDFStatusAlloc(E_DF0007_SEM_CANNOT_CREATE);
    return(err);
}
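DDFSemCreate builds a reader/writer-style semaphore out of a CL mutex, a condition variable, and a `have` counter. A minimal sketch of the same shape using POSIX threads follows; the names (`DDF_SEM`, `ddf_sem_create`) and the `-1` error convention are illustrative assumptions, not part of the Ingres CL.

```c
#include <pthread.h>
#include <stddef.h>

/* Hypothetical POSIX analog of the semaphore above: a mutex plus a
 * condition variable and a "have" counter recording current holders. */
typedef struct {
    pthread_mutex_t mutex;
    pthread_cond_t  cond;
    int             have;    /* holders currently inside the semaphore */
} DDF_SEM;

static int ddf_sem_create(DDF_SEM *sem)
{
    if (sem == NULL)
        return -1;                         /* mirrors G_ASSERT(!sem, 1) */
    if (pthread_mutex_init(&sem->mutex, NULL) != 0)
        return -1;
    if (pthread_cond_init(&sem->cond, NULL) != 0) {
        /* undo the partial init so the caller can retry cleanly */
        pthread_mutex_destroy(&sem->mutex);
        return -1;
    }
    sem->have = 0;
    return 0;
}
```

As in the original, the counter is cleared unconditionally and any init failure is collapsed into a single error for the caller.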
/*{
** Name: LGK_initialize()	- initialize the lg/lk shared mem segment.
**
** Description:
**	This routine is called by the LGinitialize or LKinitialize routine. It
**	assumes that a previous caller has allocated the shared memory segment.
**
**	If it discovers that the shared memory segment has not yet been
**	initialized, it calls the LG and LK initialize-memory routines to
**	do so.
**
** Inputs:
**	flag	- bit mask of:
**		  LOCK_LGK_MEMORY to lock the shared data segment
**		  LGK_IS_CSP if process is the CSP process on this node.
**
** Outputs:
**	sys_err	- place for system-specific error information.
**
** Returns:
**	OK	- success
**	!OK	- failure (CS*() routine failure, segment not mapped, ...)
**
** History:
**	Summer, 1992 (bryanp)
**	    Working on the new portable logging and locking system.
**	19-oct-1992 (bryanp)
**	    Check memory version number when attaching.
**	22-oct-1992 (bryanp)
**	    Change LGLKDATA.MEM to lglkdata.mem.
**	23-Oct-1992 (daveb)
**	    name the semaphore too.
**	13-feb-1993 (keving)
**	    Remove support for II_LGK_MEMORY_SIZE. If II_LG_MEMSIZE
**	    is not set then calculate memory size from PM values.
**	24-may-1993 (bryanp)
**	    If the shared memory is the wrong version, don't install the
**	    at_exit handlers (the rundown routines won't be able to interpret
**	    the memory properly).
**	26-jul-1993 (jnash)
**	    Add 'flag' param to lock the LGK data segment.
**	20-sep-1993 (bryanp)
**	    In addition to calling PCatexit, call (on VMS) sys$dclexh, since
**	    there are some situations (image death and image rundown without
**	    process rundown) which are caught neither by PCatexit (since
**	    PCexit isn't run) nor by check-dead threads (since process
**	    rundown never happened). This fixes a hole where an access-
**	    violating ckpdb or auditdb command never got cleaned up.
**	31-jan-1994 (bryanp)
**	    Back out a few "features" which are proving counterproductive:
**	    1) Don't bother checking mem_creator_pid to see if the previous
**	       creator of the shared memory has died. This was an attempt to
**	       gracefully re-use sticky shared memory following a system
**	       crash, but it is suspected as being the culprit in a series
**	       of system failures by re-initializing the shared memory at
**	       inopportune times.
**	    2) Don't complain if the shared memory already exists but is of a
**	       different size than you expected. Just go ahead and try to use
**	       it anyway.
**	21-feb-1994 (bryanp)
**	    Reverse item (1) of the above 31-jan-1994 change and re-enable
**	    the graceful re-use of shared memory. People weren't happy with
**	    having to run ipcclean and csinstall all the time.
**	23-may-1994 (bryanp)
**	    On VMS, disable ^Y for LG/LK-aware processes. We don't want to
**	    allow ^Y because you might interrupt the process right in the
**	    middle of an LG or LK operation, while holding the shared memory
**	    semaphore, and this would then wedge the whole installation.
**	17-May-1994 (daveb) 59127
**	    Attach lgk_mem semaphore if we're attaching to the segment.
**	30-jan-1995 (lawst01) bug 61984
**	    Use memory needed calculation from the 'lgk_calculate_size'
**	    function to determine the size of the shared memory pool for
**	    logging and locking. If the II_LG_MEMSIZE variable is specified
**	    with a value larger than needed, use the supplied value. If
**	    lgk_calculate_size is unable to calculate a size then use the
**	    magic number of 400000. In addition, issue a warning message
**	    and continue executing in the event the number of pages
**	    allocated is less than the number requested.
**	24-apr-1997 (nanpr01)
**	    Reinstate Bryanp's change. In the process of fixing bug 61984
**	    by Steve Lawrence, the subsequent undo of Steve's fix by Nick
**	    Ireland on 25-jun-96 (nick) caused the 'if 0' code to be removed.
**	    Part of Steve's change was not reinstated, such as not returning
**	    the status and exiting, but continuing.
**	    1. Don't complain if the shared memory already exists but is of a
**	       different size than you expected. Just go ahead and try to use
**	       it.
**	18-aug-1998 (hweho01)
**	    Reclaim the kernel resource if the LG/LK shared memory segment is
**	    reinitialized. If the shared segment is re-used (the previous
**	    creator of the shared segment has died), the cross-process
**	    semaphores get initialized more than once at the same locations.
**	    That causes kernel resource leaks on DG/UX (OS release 4.11MU04).
**	    To fix the problem, CS_cp_sem_cleanup() is called to destroy all
**	    the semaphores before the LG/LK shared segment gets recreated.
**	    CS_cp_sem_cleanup() is made dependent on xCL_NEED_SEM_CLEANUP and
**	    OS_THREADS_USED; it returns immediately for most platforms.
**	27-Mar-2000 (jenjo02)
**	    Added test for crossed thread types, refuse connection
**	    to LGK memory with E_DMA811_LGK_MT_MISMATCH.
**	18-apr-2001 (devjo01)
**	    s103715 (Portable cluster support)
**	    - Add CX mem requirement calculations.
**	    - Add LGK_IS_CSP flag to indicate that LGK memory is being
**	      initialized for a CSP process.
**	    - Add basic CX initialization.
**	19-sep-2002 (devjo01)
**	    If running NUMA clustered, allocate memory out of local RAD.
**	30-Apr-2003 (jenjo02)
**	    Rearchitected to silence long-tolerated race conditions.
**	    BUG 110121.
**	27-feb-2004 (devjo01)
**	    Rework allocation of CX shared memory to be compatible
**	    with race condition fix introduced for bug 110121.
**	29-Dec-2008 (jonj)
**	    If lgk_calculate_size() returns FAIL, the total memory
**	    needed exceeds MAX_SIZE_TYPE and we can't continue, but
**	    tell what we can about the needs of the various bits of
**	    memory before quitting.
**	06-Aug-2009 (wanfr01)
**	    Bug 122418 - Return E_DMA812 if LOCK_LGK_MUST_ATTACH is
**	    passed in and the memory segment does not exist.
**	20-Nov-2009 (maspa05) bug 122642
**	    In order to synchronize creation of UUIDs across servers, added
**	    a semaphore and a 'last time' variable into LGK memory.
**	14-Dec-2009 (maspa05) bug 122642
**	    #ifdef out the above change for Windows. The rest of the change
**	    does not apply to Windows so the variables aren't defined.
*/
STATUS
LGK_initialize(
    i4		flag,
    CL_ERR_DESC	*sys_err,
    char	*lgk_info)
{
    PTR		ptr;
    SIZE_TYPE	memleft;
    SIZE_TYPE	size;
    STATUS	ret_val;
    STATUS	mem_exists;
    char	mem_name[15];
    SIZE_TYPE	allocated_pages;
    i4		me_flags;
    i4		me_locked_flag;
    SIZE_TYPE	memory_needed;
    char	*nm_string;
    SIZE_TYPE	pages;
    LGK_MEM	*lgk_mem;
    i4		err_code;
    SIZE_TYPE	min_memory;
    i4		retries;
    i4		i;
    i4		attached;
    PID		*my_pid_slot;
    i4		clustered;
    u_i4	nodes;
    SIZE_TYPE	cxmemreq;
    PTR		pcxmem;
    LGLK_INFO	lgkcount;
    char	instid[4];

    CL_CLEAR_ERR(sys_err);

    /*
    ** if LGK_base is set then this routine has already been called. It is
    ** set up so that both LGinitialize and LKinitialize call it, but only
    ** the first call does anything.
    */
    if (LGK_base.lgk_mem_ptr)
	return(OK);

    PCpid( &LGK_my_pid );

    memory_needed = 0;
    NMgtAt("II_LG_MEMSIZE", &nm_string);
    if (nm_string && *nm_string)
#if defined(LP64)
	if (CVal8(nm_string, (long*)&memory_needed))
#else
	if (CVal(nm_string, (i4 *)&memory_needed))
#endif /* LP64 */
	    memory_needed = 0;

    /* Always calculate memory needed from PM resource settings   */
    /* and compare with the supplied value; if the supplied value */
    /* is less than the minimum, use the minimum.                 */
    min_memory = 0;
    if ( OK == lgk_get_counts(&lgkcount, FALSE))
    {
	if ( lgk_calculate_size(FALSE, &lgkcount, &min_memory) )
	{
	    /*
	    ** Memory exceeds MAX_SIZE_TYPE, can't continue.
	    **
	    ** Do the calculation again, this time with "wordy"
	    ** so the user can see the allocation bits, then quit.
	    */
	    lgk_calculate_size(TRUE, &lgkcount, &min_memory);
	    return (E_DMA802_LGKINIT_ERROR);
	}
    }
    if (min_memory)
	memory_needed = (memory_needed < min_memory) ? min_memory
						     : memory_needed;
    else
	memory_needed = (memory_needed < 400000) ? 400000
						 : memory_needed;

    clustered = (i4)CXcluster_enabled();
    cxmemreq = 0;
    if ( clustered )
    {
	if ( OK != CXcluster_nodes( &nodes, NULL ) )
	    nodes = 0;

	cxmemreq = CXshm_required( 0, nodes,
			lgkcount.lgk_max_xacts,
			lgkcount.lgk_max_locks,
			lgkcount.lgk_max_resources );
	if ( MAX_SIZE_TYPE - memory_needed < cxmemreq )
	{
	    /*
	    ** Memory exceeds MAX_SIZE_TYPE, can't continue.
	    **
	    ** Do the calculation again, this time with "wordy"
	    ** so the user can see the allocation bits, then quit.
	    */
	    SIprintf(
	  "Total LG/LK/CX allocation exceeds max of %lu bytes by %lu\n"
	  "Adjust logging/locking configuration values and try again\n",
		MAX_SIZE_TYPE,
		cxmemreq - (MAX_SIZE_TYPE - memory_needed));
	    lgk_calculate_size(TRUE, &lgkcount, &min_memory);
	    return (E_DMA802_LGKINIT_ERROR);
	}
	memory_needed += cxmemreq;
    }

    if ( memory_needed < MAX_SIZE_TYPE - ME_MPAGESIZE )
	pages = (memory_needed + ME_MPAGESIZE - 1) / ME_MPAGESIZE;
    else
	pages = memory_needed / ME_MPAGESIZE;

    /*
    ** Lock the LGK segment if requested to do so
    */
    if (flag & LOCK_LGK_MEMORY)
	me_locked_flag = ME_LOCKED_MASK;
    else
	me_locked_flag = 0;

    me_flags = (me_locked_flag | ME_MSHARED_MASK | ME_IO_MASK |
		ME_CREATE_MASK | ME_NOTPERM_MASK | ME_MZERO_MASK);
    if (CXnuma_user_rad())
	me_flags |= ME_LOCAL_RAD;

    STcopy("lglkdata.mem", mem_name);

    /*
    ** In general, we just want to attach to the shared memory and detect if
    ** we are the first process to do so. However, there are ugly race
    ** conditions to consider, as well as complications because the shared
    ** memory may be left around following a system crash.
    **
    ** First we attempt to create the shared memory. Usually it already
    ** exists, so we check for and handle the case of "already exists".
    */

    /*
    ** (jenjo02)
    **
    ** Restructured to better handle all those ugly race conditions,
    ** which are easily reproduced by running two scripts, one that
    ** continuously executes "lockstat" while the other is starting
    ** and stopping Ingres.
    **
    ** For example,
    **
    **	lockstat A	acquires and init's the memory
    **	RCP		attaches to "A" memory
    **	lockstat A	terminates normally
    **	lockstat B	attaches to "A" memory, sees that
    **			"A"s pid is no longer alive, and
    **			reinitializes the memory, much to
    **			the RCP's chagrin.
    ** or (more commonly)
    **
    **	lockstat A	acquires and begins to init the mem
    **	RCP		attaches to "A" memory which is
    **			still being zero-filled by lockstat,
    **			checks the version number (zero),
    **			and fails with a E_DMA434 mismatch.
    **
    ** The fix utilizes the mem_ext_sem to synchronize multiple
    ** processes; if the semaphore hasn't been initialized or
    ** if mem_version_no is zero, we'll wait one second and retry,
    ** up to 60 seconds before giving up. This gives the creating
    ** process time to complete initialization of the memory.
    **
    ** Up to LGK_MAX_PIDS are allowed to attach to the shared
    ** memory. When a process attaches it sets its PID in the
    ** first vacant slot in lgk_mem->mem_pid[]; if there are
    ** no vacant slots, the attach is refused. When the process
    ** terminates normally by calling LGK_rundown(), it zeroes
    ** its PID slot.
    **
    ** When attaching to an existing segment, we check if
    ** there are any live processes still using the memory;
    ** if so, we can't destroy it (no matter who created it).
    ** If there are no live processes attached to the memory,
    ** we destroy and reallocate it (based on current config.dat
    ** settings).
    */
    for ( retries = 0; ; retries++ )
    {
	LGK_base.lgk_mem_ptr = (PTR)NULL;

	/* Give up if unable to get memory in one minute */
#if defined(conf_CLUSTER_BUILD)
	if (retries > 1)
#else
	if ( retries )
#endif
	{
	    if ( retries < 60 )
		PCsleep(1000);
	    else
	    {
		/* Another process has it blocked way too long */
		uleFormat(NULL, E_DMA800_LGKINIT_GETMEM, (CL_ERR_DESC *)NULL,
			    ULE_LOG, NULL, NULL, 0, NULL, &err_code, 0);
		/* Unable to attach allocated shared memory segment.
		*/
		return (E_DMA802_LGKINIT_ERROR);
	    }
	}

	ret_val = MEget_pages(me_flags, pages, mem_name,
				(PTR*)&lgk_mem, &allocated_pages, sys_err);

	if ( mem_exists = ret_val )
	{
	    if (ret_val == ME_ALREADY_EXISTS)
	    {
		ret_val = MEget_pages((me_locked_flag |
				       ME_MSHARED_MASK | ME_IO_MASK),
				      pages, mem_name, (PTR*)&lgk_mem,
				      &allocated_pages, sys_err);
#if defined(conf_CLUSTER_BUILD)
		if (ret_val && !retries)
		    continue;	/* try one more time */
#endif
	    }
	    if (ret_val)
	    {
		uleFormat(NULL, ret_val, sys_err, ULE_LOG,
			    NULL, NULL, 0, NULL, &err_code, 0);
		uleFormat(NULL, E_DMA800_LGKINIT_GETMEM, (CL_ERR_DESC *)NULL,
			    ULE_LOG, NULL, NULL, 0, NULL, &err_code, 0);
		/* Unable to attach allocated shared memory segment. */
		return (E_DMA802_LGKINIT_ERROR);
	    }
	}
	else if (flag & LOCK_LGK_MUST_ATTACH)
	{
	    /* Do not use the shared segment you just allocated */
	    MEfree_pages((PTR)lgk_mem, allocated_pages, sys_err);
	    return (E_DMA812_LGK_NO_SEGMENT);
	}

	size = allocated_pages * ME_MPAGESIZE;

	/* Expose this process to the memory */
	LGK_base.lgk_mem_ptr = (PTR)lgk_mem;

	if ( mem_exists )
	{
	    /*
	    ** Memory exists.
	    **
	    ** Try to acquire the semaphore. If it's
	    ** uninitialized, retry from the top.
	    **
	    ** If the version is zero, then another
	    ** process is initializing the memory;
	    ** keep retrying until the version is
	    ** filled in.
	    */
	    if ( ret_val = CSp_semaphore(1, &lgk_mem->mem_ext_sem) )
	    {
		if ( ret_val != E_CS000A_NO_SEMAPHORE )
		{
		    uleFormat(NULL, ret_val, sys_err, ULE_LOG,
				NULL, NULL, 0, NULL, &err_code, 0);
		    ret_val = E_DMA802_LGKINIT_ERROR;
		    break;
		}
		continue;
	    }

	    /* Retry if still being init'd by another process */
	    if ( !lgk_mem->mem_version_no )
	    {
		CSv_semaphore(&lgk_mem->mem_ext_sem);
		continue;
	    }

	    /*
	    ** Check pids which appear to be attached to
	    ** the memory:
	    **
	    ** If any process is still alive, then we
	    ** assume the memory is consistent and use it.
	    **
	    ** If a process is now dead, it terminated
	    ** without going through LGK_rundown
	    ** to zero its PID slot; zero it now.
	    **
	    ** If there are no live PIDs attached to
	    ** the memory, we destroy and recreate it.
	    */
	    my_pid_slot = (PID*)NULL;
	    attached = 0;

	    for ( i = 0; i < LGK_MAX_PIDS; i++ )
	    {
		if ( lgk_mem->mem_pid[i] &&
		     PCis_alive(lgk_mem->mem_pid[i]) )
		{
		    attached++;
		}
		else
		{
		    /* Vacate the slot */
		    if (lgk_mem->mem_pid[i])
		    {
			uleFormat(NULL, E_DMA499_DEAD_PROCESS_INFO,
				    (CL_ERR_DESC *)NULL,
				    ULE_LOG, NULL, NULL, 0, NULL, &err_code, 2,
				    0, lgk_mem->mem_pid[i],
				    0, lgk_mem->mem_info[i].info_txt);
		    }
		    lgk_mem->mem_pid[i] = (PID)0;
		    lgk_mem->mem_info[i].info_txt[0] = EOS;

		    /* Use first vacant slot for this process */
		    if ( !my_pid_slot )
		    {
			my_pid_slot = &lgk_mem->mem_pid[i];
			LGK_base.lgk_pid_slot = i;
		    }
		}
		/* Quit when both questions answered */
		if ( attached && my_pid_slot )
		    break;
	    }

	    /* If no living pids attached, destroy/reallocate */
	    if ( !attached )
	    {
		CSv_semaphore(&lgk_mem->mem_ext_sem);
		if ( LGK_destroy(allocated_pages, sys_err) )
		{
		    ret_val = E_DMA802_LGKINIT_ERROR;
		    break;
		}
		continue;
	    }

	    /* All attached pids alive? */
	    if ( !my_pid_slot )
	    {
		/* ... then there's no room for this process */
		uleFormat(NULL, E_DMA80A_LGK_ATTACH_LIMIT, (CL_ERR_DESC *)NULL,
			    ULE_LOG, NULL, NULL, 0, NULL, &err_code, 1,
			    0, attached);
		ret_val = E_DMA802_LGKINIT_ERROR;
	    }
	    else if (lgk_mem->mem_version_no != LGK_MEM_VERSION_CURRENT)
	    {
		uleFormat(NULL, E_DMA434_LGK_VERSION_MISMATCH,
			    (CL_ERR_DESC *)NULL,
			    ULE_LOG, NULL, NULL, 0, NULL, &err_code, 2,
			    0, lgk_mem->mem_version_no,
			    0, LGK_MEM_VERSION_CURRENT);
		ret_val = E_DMA435_WRONG_LGKMEM_VERSION;
	    }
	    /*
	    ** Don't allow mixed connections of MT/non-MT processes.
	    ** Among other things, the mutexing mechanisms are
	    ** incompatible!
	    */
	    else if ( (CS_is_mt() && (lgk_mem->mem_status & LGK_IS_MT) == 0) ||
		      (!CS_is_mt() && lgk_mem->mem_status & LGK_IS_MT) )
	    {
		uleFormat(NULL, E_DMA811_LGK_MT_MISMATCH, (CL_ERR_DESC *)NULL,
			    ULE_LOG, NULL, NULL, 0, NULL, &err_code, 2,
			    0, (lgk_mem->mem_status & LGK_IS_MT) ? "OS"
								 : "INTERNAL",
			    0, (CS_is_mt()) ? "OS" : "INTERNAL");
		ret_val = E_DMA802_LGKINIT_ERROR;
	    }
	    else
	    {
		/*
		** CX memory (if any) will lie immediately past LGK header.
		*/
		pcxmem = (PTR)(lgk_mem + 1);
		pcxmem = (PTR)ME_ALIGN_MACRO(pcxmem, sizeof(ALIGN_RESTRICT));

		LGK_base.lgk_lkd_ptr = (char *)LGK_base.lgk_mem_ptr +
					lgk_mem->mem_lkd;
		LGK_base.lgk_lgd_ptr = (char *)LGK_base.lgk_mem_ptr +
					lgk_mem->mem_lgd;

		/* Stuff our pid in first vacant slot */
		*my_pid_slot = LGK_my_pid;
		STlcopy(lgk_info, lgk_mem->mem_info[i].info_txt,
			LGK_INFO_SIZE - 1);
	    }

#if defined(VMS) || defined(UNIX)
	    /* set up pointers to reference the uuid mutex and
	    ** last time variable
	    */
	    if (!ID_uuid_sem_ptr)
		ID_uuid_sem_ptr = &lgk_mem->id_uuid_sem;
	    if (!ID_uuid_last_time_ptr)
		ID_uuid_last_time_ptr = &lgk_mem->uuid_last_time;
	    if (!ID_uuid_last_cnt_ptr)
		ID_uuid_last_cnt_ptr = &lgk_mem->uuid_last_cnt;
#endif
	    CSv_semaphore(&lgk_mem->mem_ext_sem);
	}
	else
	{
	    /* Memory did not exist */

	    /* Zero the version to keep other processes out */
	    lgk_mem->mem_version_no = 0;

#if defined(VMS) || defined(UNIX)
	    /* set up the uuid mutex and last time pointers to
	    ** reference the objects in shared memory
	    */
	    {
		STATUS id_stat;

		ID_uuid_sem_ptr = &lgk_mem->id_uuid_sem;
		ID_uuid_last_time_ptr = &lgk_mem->uuid_last_time;
		ID_uuid_last_cnt_ptr = &lgk_mem->uuid_last_cnt;
		*ID_uuid_last_cnt_ptr = 0;
		ID_UUID_SEM_INIT(ID_uuid_sem_ptr, CS_SEM_MULTI, "uuid sem",
				 &id_stat);
	    }
#endif

	    /* ... then initialize the mutex */
	    CSw_semaphore(&lgk_mem->mem_ext_sem, CS_SEM_MULTI,
			    "LGK mem ext sem" );

	    /* Record if memory created for MT or not */
	    if ( CS_is_mt() )
		lgk_mem->mem_status = LGK_IS_MT;

	    /*
	    ** memory is as follows:
	    **
	    ** ------------------------------------------------------------
	    ** | LGK_MEM struct (keep track of this mem)                  |
	    ** |                                                          |
	    ** ------------------------------------------------------------
	    ** | If a clustered installation, memory reserved for CX      |
	    ** |                                                          |
	    ** ------------------------------------------------------------
	    ** | LKD - database of info for lk system                     |
	    ** |                                                          |
	    ** ------------------------------------------------------------
	    ** | LGD - database of info for lg system                     |
	    ** |                                                          |
	    ** ------------------------------------------------------------
	    ** | memory manipulated by LGKm_* routines for structures     |
	    ** | used by both the lk and lg systems.                      |
	    ** |                                                          |
	    ** ------------------------------------------------------------
	    */

	    /* put the LGK_MEM struct at head of segment leaving ptr
	    ** pointing at next aligned piece of memory
	    */

	    /*
	    ** CX memory (if any) will lie immediately past LGK header.
	    */
	    pcxmem = (PTR)(lgk_mem + 1);
	    pcxmem = (PTR)ME_ALIGN_MACRO(pcxmem, sizeof(ALIGN_RESTRICT));

	    LGK_base.lgk_lkd_ptr = pcxmem + cxmemreq;
	    LGK_base.lgk_lkd_ptr = (PTR)
		ME_ALIGN_MACRO(LGK_base.lgk_lkd_ptr, sizeof(ALIGN_RESTRICT));
	    lgk_mem->mem_lkd = (i4)((char *)LGK_base.lgk_lkd_ptr -
				    (char *)LGK_base.lgk_mem_ptr);

	    LGK_base.lgk_lgd_ptr = (PTR)((char *)LGK_base.lgk_lkd_ptr +
					 sizeof(LKD));
	    LGK_base.lgk_lgd_ptr = (PTR)
		ME_ALIGN_MACRO(LGK_base.lgk_lgd_ptr, sizeof(ALIGN_RESTRICT));
	    lgk_mem->mem_lgd = (i4)((char *)LGK_base.lgk_lgd_ptr -
				    (char *)LGK_base.lgk_mem_ptr);

	    /* now initialize the rest of memory for allocation */

	    /* how much memory is left?
	    */
	    ptr = ((char *)LGK_base.lgk_lgd_ptr + sizeof(LGD));
	    memleft = size - (((char *)ptr) - ((char *)LGK_base.lgk_mem_ptr));

	    if ( (ret_val = lgkm_initialize_mem(memleft, ptr)) == OK &&
		 (ret_val = LG_meminit(sys_err)) == OK &&
		 (ret_val = LK_meminit(sys_err)) == OK )
	    {
		/* Clear array of attached pids and pid info */
		for ( i = 0; i < LGK_MAX_PIDS; i++ )
		{
		    lgk_mem->mem_pid[i] = (PID)0;
		    lgk_mem->mem_info[i].info_txt[0] = EOS;
		}

		/* Set the creator pid */
		LGK_base.lgk_pid_slot = 0;
		lgk_mem->mem_creator_pid = LGK_my_pid;

		/* Set the version, releasing other processes */
		lgk_mem->mem_version_no = LGK_MEM_VERSION_CURRENT;
	    }
	    else
	    {
		uleFormat(NULL, ret_val, (CL_ERR_DESC *)NULL, ULE_LOG,
			    NULL, NULL, 0, NULL, &err_code, 0);
		ret_val = E_DMA802_LGKINIT_ERROR;

		/* Destroy the shared memory */
		LGK_destroy(allocated_pages, sys_err);
	    }
	}

	if ( ret_val == OK )
	{
	    PCatexit(LGK_rundown);

	    if ( clustered )
	    {
		/*
		** Perform preliminary cluster connection and CX memory init.
		*/

		/* Get installation code */
		NMgtAt("II_INSTALLATION", &nm_string);
		if ( nm_string )
		{
		    instid[0] = *(nm_string);
		    instid[1] = *(nm_string + 1);
		}
		else
		{
		    instid[0] = 'A';
		    instid[1] = 'A';
		}
		instid[2] = '\0';

		ret_val = CXinitialize( instid, pcxmem, flag & LGK_IS_CSP );
		if ( ret_val )
		{
		    /* Report error returned from CX */
		    uleFormat(NULL, ret_val, (CL_ERR_DESC *)NULL, ULE_LOG,
				NULL, NULL, 0, NULL, &err_code, 0 );
		    break;
		}
	    }

#ifdef VMS
	    {
		static $EXHDEF	exit_block;
		i4		ctrl_y_mask = 0x02000000;

		/*
		** On VMS, programs like the dmfjsp and logstat run as images
		** in the shell process. That is, the system doesn't start
		** and stop a process for each invocation of the program, it
		** just starts and stops an image in the same process. This
		** means that if the program should die, the image may be
		** rundown but the process will remain, which means that the
		** check-dead threads of other processes in the installation
		** will not feel that they need to rundown this process,
		** since it's still alive.
		**
		** By declaring an exit handler, which will get a chance to
		** run even if PCexit isn't called, we improve our chances
		** of getting to perform rundown processing if we should die
		** unexpectedly.
		**
		** Furthermore, we ask DCL to disable its ^Y processing,
		** which lessens the chance that the user will interrupt us
		** while we are holding the semaphore.
		*/
		exit_block.exh$g_func = LGK_rundown;
		exit_block.exh$l_argcount = 1;
		exit_block.exh$gl_value = &exit_block.exh$l_status;
		if (sys$dclexh(&exit_block) != SS$_NORMAL)
		    ret_val = FAIL;
		lib$disable_ctrl(&ctrl_y_mask, 0);
	    }
#endif
	}
	break;
    }

    if ( ret_val )
	LGK_base.lgk_mem_ptr = NULL;

    return(ret_val);
}
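The attach logic above scans lgk_mem->mem_pid[], counting live attachers, vacating slots left by dead processes, and claiming the first vacancy. A minimal, testable sketch of that scan follows; claim_slot() and demo_is_alive() are illustrative stand-ins (demo_is_alive substitutes for PCis_alive() so the logic can run without real processes), and the early-out in the original loop is omitted for clarity.

```c
#include <stddef.h>

#define LGK_MAX_PIDS 8   /* illustrative; the real limit is build-defined */
typedef int PID;

/* Stand-in for PCis_alive(): for this sketch, pretend PIDs >= 1000
 * belong to live processes and anything lower is dead. */
static int demo_is_alive(PID pid) { return pid >= 1000; }

/* Count live attached processes, vacate dead slots, and claim the
 * first vacancy for PID `me`.  Returns the claimed slot index, or -1
 * if every slot is held by a live process (attach refused). */
static int claim_slot(PID pids[], int n, PID me, int (*is_alive)(PID))
{
    int slot = -1;
    for (int i = 0; i < n; i++) {
        if (pids[i] && is_alive(pids[i]))
            continue;                /* live attacher, slot occupied */
        pids[i] = 0;                 /* vacate a dead process's slot */
        if (slot < 0)
            slot = i;                /* remember first vacancy */
    }
    if (slot >= 0)
        pids[slot] = me;
    return slot;
}
```

The same invariant as in LGK_initialize holds: a slot is either zero (vacant), a live PID, or a dead PID about to be vacated; a full table of live PIDs means the attach limit has been reached.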
/*{
** Name: gwf_init	- initialize the gateway facility
**
** Description:
**	This function performs general gateway initialization. Facility global
**	structures are allocated and initialized.
**
**	A ULM memory stream is set up for GWF for allocating the various GWF
**	data structures.
**
**	Gateway initialization exits are called to initialize the gateway. The
**	identities of the initialization exits are obtained from Gwf_itab.
**
** Inputs:
**	gw_rcb->		Standard GWF control block
**	    gwr_dmf_cptr	The address of function "dmf_call()", so that
**				we can call back to DMF for, e.g., extended
**				catalog access.
**
** Output:
**	gw_rcb->		Standard GWF control block
**	    gwr_out_vdata1	Release id of Gateway.
**	    gwr_scfcb_size	size of CB for SCF to allocate per session.
**	    gwr_server		set to the Gwf_facility, for SCF to know.
**	    error->
**		err_code	One of the following error numbers.
**				E_GW0200_GWF_INIT_ERROR
**				E_GW0600_NO_MEM
**
** Returns:
**	E_DB_OK		Function completed normally.
**	E_DB_ERROR	Cannot allocate Gwf_facility.
**	E_DB_WARN	Success, informational status sent back to DMF
**			in the error.err_code field (either that there
**			is no gateway initialized, or that none of the
**			gateways needs transaction notification).
**
** History:
**	21-Apr-1989 (alexh)
**	    Created.
**	14-Dec-1989 (linda)
**	    Extended catalog table names were being filled in incorrectly;
**	    see comments below.
**	23-dec-89 (paul)
**	    Changed memory allocation strategy for the gateway. See comments
**	    embedded in code.
**	26-mar-90 (linda)
**	    Changed error handling. Changed to have one return point.
**	5-apr-90 (bryanp)
**	    Added improved calculation of GWF memory pool. Pool size is now
**	    scaled by number of users.
**	9-apr-90 (bryanp)
**	    This function is now called via gwf_call(), and takes a gw_rcb.
**	18-apr-90 (bryanp)
**	    If SCF says not enough memory, return proper error code.
**	27-sep-90 (linda)
**	    Set up pointer to tidp tuple for extended attribute tables, in
**	    support of gateway secondary indexes.
**	5-dec-90 (linda)
**	    Initialize the tcb semaphore. We were using it for locking the
**	    tcb list -- but it hadn't been initialized so locking was not
**	    working.
**	4-nov-91 (rickh)
**	    Return release identifier string at server initialization time.
**	    SCF spits up this string when poked with
**	    "select dbmsinfo( '_version' )"
**	7-oct-92 (daveb)
**	    fill in gwr_scfcb_size and gwr_server at init time so SCF
**	    can treat us as a first class citizen and make the session
**	    init calls. Prototyped.
**	23-Oct-1992 (daveb)
**	    name semaphore.
**	21-sep-92 (schang)
**	    initialize individual gateway specific server wide memory pointer
**	05-mar-97 (toumi01)
**	    initialize the global trace flags array
**	24-jul-97 (stial01)
**	    gwf_init() Set gwx_rcb.xrcb_gchdr_size before calling gateway
**	    init.
*/
DB_STATUS
gwf_init( GW_RCB *gw_rcb )
{
    i4		i;
    SCF_CB	scf_cb;
    DB_STATUS	status;
    STATUS	cl_status;

    /* zero out the release id descriptor */
    MEfill(sizeof( DM_DATA ), 0, (PTR)&gw_rcb->gwr_out_vdata1 );

    for (;;)	/* Something to break out of...
		*/
    {
	/* allocate Gwf_facility */
	scf_cb.scf_type = SCF_CB_TYPE;
	scf_cb.scf_length = sizeof(SCF_CB);
	scf_cb.scf_session = DB_NOSESSION;
	scf_cb.scf_facility = DB_GWF_ID;
	scf_cb.scf_scm.scm_functions = 0;
	scf_cb.scf_scm.scm_in_pages = (sizeof(GW_FACILITY)/SCU_MPAGESIZE+1);
	if ((status = scf_call(SCU_MALLOC, &scf_cb)) != E_DB_OK)
	{
	    gwf_error(scf_cb.scf_error.err_code, GWF_INTERR, 0);
	    gwf_error(E_GW0300_SCU_MALLOC_ERROR, GWF_INTERR, 1,
		      sizeof(scf_cb.scf_scm.scm_in_pages),
		      &scf_cb.scf_scm.scm_in_pages);
	    switch (scf_cb.scf_error.err_code)
	    {
		case E_SC0004_NO_MORE_MEMORY:
		case E_SC0005_LESS_THAN_REQUESTED:
		case E_SC0107_BAD_SIZE_EXPAND:
		    gw_rcb->gwr_error.err_code = E_GW0600_NO_MEM;
		    break;
		default:
		    gw_rcb->gwr_error.err_code = E_GW0200_GWF_INIT_ERROR;
		    break;
	    }
	    break;
	}

	Gwf_facility = (GW_FACILITY *)scf_cb.scf_scm.scm_addr;
	Gwf_facility->gwf_tcb_list = NULL;

	cl_status = CSw_semaphore(&Gwf_facility->gwf_tcb_lock, CS_SEM_SINGLE,
				  "GWF TCB sem" );
	if (cl_status != OK)
	{
	    gwf_error(cl_status, GWF_INTERR, 0);
	    gw_rcb->gwr_error.err_code = E_GW0200_GWF_INIT_ERROR;
	    status = E_DB_ERROR;
	    break;
	}

	/*
	** Initialize memory allocation scheme for GWF. We have the following
	** memory allocation scheme.
	**
	** 1. TCB
	**	Allocated directly by SCF. Allocation and deallocation
	**	is controlled directly by GWF. It looks like TCBs are
	**	held until they are no longer valid (due to a DROP or
	**	REGISTER INDEX) or until the server shuts down. It's
	**	not clear this is the best allocation strategy.
	**
	** 2. SCB
	**	The session control block is allocated within its own
	**	ULM memory stream. Since there is no other information
	**	that lives for the entire session, this is the only
	**	information handled by this memory stream. The stream
	**	id is stored in the SCB.
	**
	** 3. RSB
	**	The record control blocks containing information for a
	**	particular access to a gateway table are allocated from
	**	a separate stream initialized at the time the table is
	**	"opened" for access and deleted at the time the table
	**	is "closed". The stream id is stored in the RSB.
	**
	** 4. Temporary Memory
	**	Memory needed for a single operation such as
	**	registering a table is allocated from a ULM memory
	**	stream. Such streams must be opened and closed within a
	**	single invocation of the GWF.
	**
	** At this time we initialize the pool from which ULM streams will be
	** allocated.
	*/
	Gwf_facility->gwf_ulm_rcb.ulm_facility = DB_GWF_ID;
	Gwf_facility->gwf_ulm_rcb.ulm_blocksize = SCU_MPAGESIZE;
	Gwf_facility->gwf_ulm_rcb.ulm_sizepool = gwf_def_pool_size();

	status = ulm_startup(&Gwf_facility->gwf_ulm_rcb);
	if (status != E_DB_OK)
	{
	    gwf_error(Gwf_facility->gwf_ulm_rcb.ulm_error.err_code,
		      GWF_INTERR, 0);
	    gwf_error(E_GW0310_ULM_STARTUP_ERROR, GWF_INTERR, 1,
		      sizeof(Gwf_facility->gwf_ulm_rcb.ulm_sizepool),
		      &Gwf_facility->gwf_ulm_rcb.ulm_sizepool);
	    if (Gwf_facility->gwf_ulm_rcb.ulm_error.err_code == E_UL0005_NOMEM)
		gw_rcb->gwr_error.err_code = E_GW0600_NO_MEM;
	    else
		gw_rcb->gwr_error.err_code = E_GW0200_GWF_INIT_ERROR;
	    break;
	}

	Gwf_facility->gwf_gw_active = 0;	/* assume no gateways.
						*/
	Gwf_facility->gwf_gw_xacts = 0;		/* and no transaction
						** handling */

	/* initialize the global trace flags array */
	MEfill(sizeof(Gwf_facility->gwf_trace), 0,
	       (PTR)Gwf_facility->gwf_trace);

	/* initialize each gateway's exit vector */
	for (i = 0; i < GW_GW_COUNT; ++i)
	{
	    GWX_RCB	gwx_rcb;

	    gwx_rcb.xrcb_gwf_version = GWX_VERSION;
	    gwx_rcb.xrcb_exit_table =
		(GWX_VECTOR *)&Gwf_facility->gwf_gw_info[i].gwf_gw_exits[0];
	    gwx_rcb.xrcb_dmf_cptr = gw_rcb->gwr_dmf_cptr;
	    gwx_rcb.xrcb_gca_cb = gw_rcb->gwr_gca_cb;

	    /*
	    ** schang: init new field xrcb_xhandle; this field passes an
	    ** individual gateway specific, server wide memory
	    ** pointer (sep-21-1992)
	    ** initialize xrcb_xbitset (aug-12-93)
	    */
	    gwx_rcb.xrcb_xhandle = NULL;
	    gwx_rcb.xrcb_xbitset = 0;
	    MEfill(sizeof( DM_DATA ), 0, (PTR)&gwx_rcb.xrcb_var_data1 );

	    /* refer to Gwf_itab to decide which initializations are
	    ** required
	    */
	    if (Gwf_itab[i] == NULL)
	    {
		Gwf_facility->gwf_gw_info[i].gwf_gw_exist = 0;
	    }
	    else if ((status = (*Gwf_itab[i])(&gwx_rcb)) == E_DB_OK)
	    {
		/* schang: new memory pointer initialized */
		Gwf_facility->gwf_gw_info[i].gwf_xhandle =
		    gwx_rcb.xrcb_xhandle;
		Gwf_facility->gwf_gw_info[i].gwf_xbitset =
		    gwx_rcb.xrcb_xbitset;
		Gwf_facility->gwf_gw_info[i].gwf_rsb_sz =
		    gwx_rcb.xrcb_exit_cb_size;
		Gwf_facility->gwf_gw_info[i].gwf_xrel_sz =
		    gwx_rcb.xrcb_xrelation_sz;
		Gwf_facility->gwf_gw_info[i].gwf_xatt_sz =
		    gwx_rcb.xrcb_xattribute_sz;
		Gwf_facility->gwf_gw_info[i].gwf_xidx_sz =
		    gwx_rcb.xrcb_xindex_sz;
		Gwf_facility->gwf_gw_info[i].gwf_gw_exist = 1;

		/* initialize extended catalog names */
		STprintf((char *)&Gwf_facility->gwf_gw_info[i].gwf_xrel_tab_name,
			 "iigw%02d_relation", i);
		STprintf((char *)&Gwf_facility->gwf_gw_info[i].gwf_xatt_tab_name,
			 "iigw%02d_attribute", i);
		STprintf((char *)&Gwf_facility->gwf_gw_info[i].gwf_xidx_tab_name,
			 "iigw%02d_index", i);

		/* pass the release identifier up to SCF */
		if ( gwx_rcb.xrcb_var_data1.data_address != 0 )
		{
		    MEcopy( (PTR)&gwx_rcb.xrcb_var_data1, sizeof( DM_DATA ),
			    (PTR)&gw_rcb->gwr_out_vdata1 );
		}

		/*
		** Now set up pointer to tidp tuple for this gateway's
		** extended attribute catalog, to support gateway secondary
		** indexes.
		*/
		Gwf_facility->gwf_gw_info[i].gwf_xatt_tidp =
		    gwx_rcb.xrcb_xatt_tidp;

		/*
		** Note, if >1 gateway is initialized, then if any gateway
		** needs transaction notification, DMF will always notify.
		** Also note, we check error.err_code here even though
		** status is E_DB_OK. Not great, but I can't think of a
		** better way...
		*/
		if (gwx_rcb.xrcb_error.err_code == E_GW0500_GW_TRANSACTIONS)
		    Gwf_facility->gwf_gw_xacts = 1;

		Gwf_facility->gwf_gw_active = 1;
	    }
	    else	/* status != E_DB_OK */
	    {
		gwf_error(gwx_rcb.xrcb_error.err_code, GWF_INTERR, 0);
		gw_rcb->gwr_error.err_code = E_GW0200_GWF_INIT_ERROR;
		break;
	    }
	}

	gw_rcb->gwr_scfcb_size = sizeof(GW_SESSION);
	gw_rcb->gwr_server = (PTR)Gwf_facility;

	if (status != E_DB_OK)
	    break;	/* gateway exit failed */

	/*
	** Now that we're ready to go, assign global Dmf_cptr its value (==
	** the address of function dmf_call()). We need to do this to remove
	** explicit calls to the DMF facility, resolving circular references
	** of shareable libraries when building.
	*/
	Dmf_cptr = gw_rcb->gwr_dmf_cptr;

	break;
    }

    if (status != E_DB_OK)
    {
	return(status);
    }
    else
    {
	gw_rcb->gwr_error.err_code = E_DB_OK;
	return(E_DB_OK);
    }
}
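Both functions above size page allocations from a byte count: gwf_init requests sizeof(GW_FACILITY)/SCU_MPAGESIZE + 1 pages (which over-allocates a page whenever the size is an exact multiple of the page size), while LGK_initialize uses the exact round-up (bytes + pagesize - 1) / pagesize. A small sketch of the exact form, assuming a hypothetical 8192-byte page (the real SCU_MPAGESIZE/ME_MPAGESIZE values are platform-defined):

```c
#include <stddef.h>

#define DEMO_PAGESIZE 8192u   /* illustrative page size, not the real one */

/* Round a byte count up to whole pages without over-allocating on
 * exact multiples, as in LGK_initialize's pages computation. */
static size_t pages_needed(size_t bytes)
{
    return (bytes + DEMO_PAGESIZE - 1) / DEMO_PAGESIZE;
}
```

The `/pagesize + 1` form differs only on exact multiples, where it returns one page more; for a one-shot facility allocation the extra page is harmless, which may be why the original was never changed.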