void WorldObjectEffect::init(WorldObjectStore *oStore, UINT maxThreads, UINT maxNumObjects)
{
    initialized = true;
    objectStore = oStore;
    oStore->setWorldObjectEffect(this);
    // try to do all expensive operations like shader loading and PSO creation here
    // Create the pipeline state, which includes compiling and loading shaders.
    {
        createRootSigAndPSO(rootSignature, pipelineState);
        cbvAlignedSize = calcConstantBufferSize((UINT)sizeof(cbv));
        createConstantBuffer((UINT)2 * cbvAlignedSize, L"objecteffect_cbv_resource"); // TODO
        setSingleCBVMode(maxThreads, maxNumObjects, sizeof(cbv), L"objecteffect_cbvsingle_resource");
        // set cbv data:
        XMMATRIX ident = XMMatrixIdentity();
        XMStoreFloat4x4(&cbv.wvp, ident);
        cbv.world = cbv.wvp;
        //memcpy(cbvGPUDest + cbvAlignedSize, &cbv, sizeof(cbv));
    }

    // Create command allocators and command lists for each frame.
    static LPCWSTR fence_names[XApp::FrameCount] = {
        L"fence_objecteffect_0", L"fence_objecteffect_1", L"fence_objecteffect_2"
    };
    for (UINT n = 0; n < XApp::FrameCount; n++) {
        ThrowIfFailed(xapp().device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT, IID_PPV_ARGS(&commandAllocators[n])));
        ThrowIfFailed(xapp().device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT, commandAllocators[n].Get(), pipelineState.Get(), IID_PPV_ARGS(&commandLists[n])));
        // Command lists are created in the recording state, but there is nothing
        // to record yet. The main loop expects it to be closed, so close it now.
        ThrowIfFailed(commandLists[n]->Close());
        // init fences:
        ThrowIfFailed(xapp().device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&frameData[n].fence)));
        frameData[n].fence->SetName(fence_names[n]);
        frameData[n].fenceValue = 0;
        frameData[n].fenceEvent = CreateEventEx(nullptr, FALSE, FALSE, EVENT_ALL_ACCESS);
        if (frameData[n].fenceEvent == nullptr) {
            ThrowIfFailed(HRESULT_FROM_WIN32(GetLastError()));
        }
    }

    // init resources for update thread:
    ThrowIfFailed(xapp().device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT, IID_PPV_ARGS(&updateCommandAllocator)));
    ThrowIfFailed(xapp().device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT, updateCommandAllocator.Get(), pipelineState.Get(), IID_PPV_ARGS(&updateCommandList)));
    // Command lists are created in the recording state, but there is nothing
    // to record yet. The main loop expects it to be closed, so close it now.
    ThrowIfFailed(updateCommandList->Close());
    // init fence:
    ThrowIfFailed(xapp().device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&updateFrameData.fence)));
    updateFrameData.fence->SetName(L"fence_objecteffect_update");
    updateFrameData.fenceValue = 0;
    updateFrameData.fenceEvent = CreateEventEx(nullptr, FALSE, FALSE, EVENT_ALL_ACCESS);
    if (updateFrameData.fenceEvent == nullptr) {
        ThrowIfFailed(HRESULT_FROM_WIN32(GetLastError()));
    }
}
btWin32Barrier()
{
    mCounter = 0;
    mMaxCount = 1;
    mEnableCounter = 0;
    InitializeCriticalSection(&mExternalCriticalSection);
    InitializeCriticalSection(&mLocalCriticalSection);
#ifdef WINRT
    mRunEvent = CreateEventEx(NULL, NULL, CREATE_EVENT_MANUAL_RESET, EVENT_ALL_ACCESS);
    mNotifyEvent = CreateEventEx(NULL, NULL, CREATE_EVENT_MANUAL_RESET, EVENT_ALL_ACCESS);
#else
    mRunEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
    mNotifyEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
#endif
}
_Use_decl_annotations_ VOID WINAPI CurlSleep(DWORD dwMilliseconds)
{
    static HANDLE singletonEvent = NULL;

    HANDLE sleepEvent = singletonEvent;
    HANDLE previousEvent = NULL;

    // Demand create the event.
    if (!sleepEvent) {
        sleepEvent = CreateEventEx(NULL, NULL, CREATE_EVENT_MANUAL_RESET, EVENT_ALL_ACCESS);
        if (!sleepEvent)
            return;
        previousEvent = InterlockedCompareExchangePointerRelease(&singletonEvent, sleepEvent, NULL);
        if (previousEvent) {
            // Back out if multiple threads try to demand create at the same time.
            CloseHandle(sleepEvent);
            sleepEvent = previousEvent;
        }
    }

    // Emulate sleep by waiting with timeout on an event that is never signalled.
    WaitForSingleObjectEx(sleepEvent, dwMilliseconds, FALSE);
}
HRESULT STDMETHODCALLTYPE CD3DX12AffinityFence::WaitOnFenceCompletion(UINT64 Value)
{
    std::vector<HANDLE> Events;
    UINT EventCount = 0;

    for (UINT i = 0; i < D3DX12_MAX_ACTIVE_NODES; i++) {
        if (((1u << i) & mAffinityMask) != 0) {
            ID3D12Fence *Fence = mFences[i];
            Events.push_back(CreateEventEx(nullptr, nullptr, FALSE, EVENT_ALL_ACCESS));
            HRESULT const hr = Fence->SetEventOnCompletion(Value, Events[EventCount]);
            if (hr != S_OK) {
                // Close any events created so far before bailing out.
                for (HANDLE e : Events)
                    CloseHandle(e);
                return hr;
            }
            ++EventCount;
        }
    }

    WaitForMultipleObjects((DWORD)EventCount, Events.data(), TRUE, INFINITE);

    // These are one-shot waits; close the handles to avoid leaking them.
    for (HANDLE e : Events)
        CloseHandle(e);
    return S_OK;
}
void resource_storage::init(ID3D12Device *device)
{
    in_use = false;
    m_device = device;
    ram_framebuffer = nullptr;

    // Create a global command allocator
    CHECK_HRESULT(device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT, IID_PPV_ARGS(command_allocator.GetAddressOf())));
    CHECK_HRESULT(m_device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT, command_allocator.Get(), nullptr, IID_PPV_ARGS(command_list.GetAddressOf())));
    CHECK_HRESULT(command_list->Close());

    D3D12_DESCRIPTOR_HEAP_DESC descriptor_heap_desc = { D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV, 10000, D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE };
    CHECK_HRESULT(device->CreateDescriptorHeap(&descriptor_heap_desc, IID_PPV_ARGS(&descriptors_heap)));

    D3D12_DESCRIPTOR_HEAP_DESC sampler_heap_desc = { D3D12_DESCRIPTOR_HEAP_TYPE_SAMPLER, 2048, D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE };
    CHECK_HRESULT(device->CreateDescriptorHeap(&sampler_heap_desc, IID_PPV_ARGS(&sampler_descriptor_heap[0])));
    CHECK_HRESULT(device->CreateDescriptorHeap(&sampler_heap_desc, IID_PPV_ARGS(&sampler_descriptor_heap[1])));

    // Check these heap creations too; the original silently ignored their HRESULTs.
    D3D12_DESCRIPTOR_HEAP_DESC ds_descriptor_heap_desc = { D3D12_DESCRIPTOR_HEAP_TYPE_DSV, 10000 };
    CHECK_HRESULT(device->CreateDescriptorHeap(&ds_descriptor_heap_desc, IID_PPV_ARGS(&depth_stencil_descriptor_heap)));

    D3D12_DESCRIPTOR_HEAP_DESC rtv_descriptor_heap_desc = { D3D12_DESCRIPTOR_HEAP_TYPE_RTV, 10000 };
    CHECK_HRESULT(device->CreateDescriptorHeap(&rtv_descriptor_heap_desc, IID_PPV_ARGS(&render_targets_descriptors_heap)));

    frame_finished_handle = CreateEventEx(nullptr, FALSE, FALSE, EVENT_ALL_ACCESS);
    fence_value = 0;
    CHECK_HRESULT(device->CreateFence(fence_value++, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(frame_finished_fence.GetAddressOf())));
}
int pthread_create(pthread_t* thread, const pthread_attr_t* attr, void* (*start_routine)(void*), void* arg)
{
    fn* f;
    HANDLE evt;
    HANDLE thr;

    if (!thread || !start_routine) {
        return EFAULT;
    }
    if (attr) {
        return EINVAL;
    }

    // Create the event only after the arguments are validated, so the
    // early-return paths above cannot leak the handle.
    evt = CreateEventEx(NULL, NULL, CREATE_EVENT_MANUAL_RESET, EVENT_ALL_ACCESS);
    if (!evt) {
        return EAGAIN;
    }

    f = calloc(1, sizeof(fn));
    if (!f) {
        CloseHandle(evt);
        return ENOMEM;
    }
    f->fun = start_routine;
    f->context = arg;
    f->pth = thread;
    f->evt = evt;

    thr = CreateThread(NULL, 0, thread_proc, f, 0, NULL);
    if (!thr) {
        CloseHandle(evt);
        free(f);
        return EAGAIN;
    }
    CloseHandle(thr);

    // Block until thread_proc signals that it has stored the thread id.
    WaitForSingleObjectEx(evt, INFINITE, FALSE);
    CloseHandle(evt);
    return 0;
}
void SDL_Delay(Uint32 ms)
{
    /* Sleep() is not publicly available to apps in early versions of WinRT.
     *
     * Visual C++ 2013 Update 4 re-introduced Sleep() for Windows 8.1 and
     * Windows Phone 8.1.
     *
     * Use the compiler version to determine availability.
     *
     * NOTE #1: _MSC_FULL_VER == 180030723 for Visual C++ 2013 Update 3.
     * NOTE #2: Visual C++ 2013, when compiling for Windows 8.0 and
     *    Windows Phone 8.0, uses the Visual C++ 2012 compiler to build
     *    apps and libraries.
     */
#if defined(__WINRT__) && defined(_MSC_FULL_VER) && (_MSC_FULL_VER <= 180030723)
    static HANDLE mutex = 0;
    if (!mutex) {
        mutex = CreateEventEx(0, 0, 0, EVENT_ALL_ACCESS);
    }
    WaitForSingleObjectEx(mutex, ms, FALSE);
#else
    if (!ticks_started) {
        SDL_TicksInit();
    }
    Sleep(ms);
#endif
}
HRESULT WasapiWrap::Start(void)
{
    BYTE *pData = nullptr;
    HRESULT hr = 0;

    assert(m_pcmData);
    assert(!m_shutdownEvent);
    m_shutdownEvent = CreateEventEx(nullptr, nullptr, 0, EVENT_MODIFY_STATE | SYNCHRONIZE);
    CHK(m_shutdownEvent);

    m_renderThread = CreateThread(nullptr, 0, RenderEntry, this, 0, nullptr);
    assert(m_renderThread);

    assert(m_renderClient);
    HRG(m_renderClient->GetBuffer(m_bufferFrameNum, &pData));
    memset(pData, 0, m_bufferFrameNum * m_frameBytes);
    HRG(m_renderClient->ReleaseBuffer(m_bufferFrameNum, 0));

    m_footerCount = 0;
    assert(m_audioClient);
    HRG(m_audioClient->Start());

end:
    return hr;
}
static value sys_sleep( value f )
{
    val_check(f, number);
    gc_enter_blocking();
#ifdef HX_WINRT
    if (!tlsSleepEvent)
        tlsSleepEvent = CreateEventEx(nullptr, nullptr, CREATE_EVENT_MANUAL_RESET, EVENT_ALL_ACCESS);
    WaitForSingleObjectEx(tlsSleepEvent, (int)(val_number(f) * 1000), false);
#elif defined(NEKO_WINDOWS)
    Sleep((DWORD)(val_number(f) * 1000));
#elif defined(EPPC)
    //TODO: Implement sys_sleep for EPPC
#else
    {
        struct timespec t;
        struct timespec tmp;
        t.tv_sec = (int)val_number(f);
        t.tv_nsec = (int)((val_number(f) - t.tv_sec) * 1e9);
        while( nanosleep(&t, &tmp) == -1 ) {
            if( errno != EINTR ) {
                gc_exit_blocking();
                return alloc_null();
            }
            t = tmp;
        }
    }
#endif
    gc_exit_blocking();
    return alloc_bool(true);
}
void __stdcall Sleep(_In_ DWORD dwMilliseconds)
{
    static HANDLE singletonEvent = nullptr;

    HANDLE sleepEvent = singletonEvent;

    // Demand create the event.
    if (!sleepEvent) {
        sleepEvent = CreateEventEx(nullptr, nullptr, CREATE_EVENT_MANUAL_RESET, EVENT_ALL_ACCESS);
        if (!sleepEvent)
            return;

        HANDLE previousEvent = InterlockedCompareExchangePointerRelease(&singletonEvent, sleepEvent, nullptr);
        if (previousEvent) {
            // Back out if multiple threads try to demand create at the same time.
            CloseHandle(sleepEvent);
            sleepEvent = previousEvent;
        }
    }

    // Emulate sleep by waiting with timeout on an event that is never signaled.
    WaitForSingleObjectEx(sleepEvent, dwMilliseconds, FALSE);
}
//--------------------------------------------------------------------------
void D3D12RenderWindow::Init(D3D12Renderer& kRenderer) noexcept
{
    if ((!m_kNode.is_attach()) && m_spTargetWindow)
    {
        D3D12_COMMAND_QUEUE_DESC kQueueDesc = {};
        kQueueDesc.Flags = D3D12_COMMAND_QUEUE_FLAG_NONE;
        kQueueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
        VE_ASSERT_GE(kRenderer.m_pkDevice->CreateCommandQueue(&kQueueDesc, IID_PPV_ARGS(&m_pkCommandQueue)), S_OK);

        DXGI_SWAP_CHAIN_DESC kSwapChainDesc = {};
        kSwapChainDesc.BufferCount = D3D12Renderer::FRAME_COUNT;
        kSwapChainDesc.BufferDesc.Width = m_spTargetWindow->GetWidth();
        kSwapChainDesc.BufferDesc.Height = m_spTargetWindow->GetHeight();
        kSwapChainDesc.BufferDesc.Format = DXGI_FORMAT_R10G10B10A2_UNORM;
        kSwapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
        kSwapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD;
        kSwapChainDesc.OutputWindow = (HWND)(m_spTargetWindow->GetNativeHandle());
        kSwapChainDesc.SampleDesc.Count = 1;
        kSwapChainDesc.Windowed = TRUE;

        IDXGISwapChain* pkSwapChain;
        VE_ASSERT_GE(kRenderer.m_pkDXGIFactory->CreateSwapChain(m_pkCommandQueue, &kSwapChainDesc, &pkSwapChain), S_OK);
        VE_ASSERT_GE(pkSwapChain->QueryInterface(IID_PPV_ARGS(&m_pkSwapChain)), S_OK);
        VE_SAFE_RELEASE(pkSwapChain);
        VE_ASSERT(m_pkCommandQueue && m_pkSwapChain);

        for (uint32_t i(0); i < D3D12Renderer::FRAME_COUNT; ++i)
        {
            FrameCache& kFrame = m_akFrameCache[i];
            VE_ASSERT_GE(m_pkSwapChain->GetBuffer(i, IID_PPV_ARGS(&kFrame.m_pkBufferResource)), S_OK);
            kFrame.m_hHandle.ptr = kRenderer.m_kRTVHeap.GetCPUStart().ptr + kRenderer.m_kRTVHeap.Alloc();
            kRenderer.m_pkDevice->CreateRenderTargetView(kFrame.m_pkBufferResource, nullptr, kFrame.m_hHandle);
            VE_ASSERT_GE(kRenderer.m_pkDevice->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT, IID_PPV_ARGS(&kFrame.m_pkDirectAllocator)), S_OK);
            VE_ASSERT_GE(kRenderer.m_pkDevice->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_BUNDLE, IID_PPV_ARGS(&kFrame.m_pkBundleAllocator)), S_OK);
            kFrame.m_u64FenceValue = 0;
            VE_ASSERT_GE(kRenderer.m_pkDevice->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT, kFrame.m_pkDirectAllocator, nullptr, IID_PPV_ARGS(&kFrame.m_pkTestList)), S_OK);
            VE_ASSERT_GE(kFrame.m_pkTestList->Close(), S_OK);
        }

        m_u64FenceValue = 0;
        VE_ASSERT_GE(kRenderer.m_pkDevice->CreateFence(m_u64FenceValue++, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&m_pkFence)), S_OK);
        m_kFenceEvent = CreateEventEx(nullptr, FALSE, FALSE, EVENT_ALL_ACCESS);
        VE_ASSERT(m_kFenceEvent);

        const uint64_t u64FenceToWaitFor = m_u64FenceValue++;
        VE_ASSERT_GE(m_pkCommandQueue->Signal(m_pkFence, u64FenceToWaitFor), S_OK);
        VE_ASSERT_GE(m_pkFence->SetEventOnCompletion(u64FenceToWaitFor, m_kFenceEvent), S_OK);
        WaitForSingleObject(m_kFenceEvent, INFINITE);

        m_u32FramePtr = m_pkSwapChain->GetCurrentBackBufferIndex();
        m_u64FrameIndex = 0;
        m_spTargetWindow->Show();
        kRenderer.m_kRenderWindowList.attach_back(m_kNode);
    }
}
// Update frame-based values.
void D3D12Multithreading::OnUpdate()
{
    m_timer.Tick(NULL);

    PIXSetMarker(m_commandQueue.Get(), 0, L"Getting last completed fence.");

    // Get current GPU progress against submitted workload. Resources still scheduled
    // for GPU execution cannot be modified or else undefined behavior will result.
    const UINT64 lastCompletedFence = m_fence->GetCompletedValue();

    // Move to the next frame resource.
    m_currentFrameResourceIndex = (m_currentFrameResourceIndex + 1) % FrameCount;
    m_pCurrentFrameResource = m_frameResources[m_currentFrameResourceIndex];

    // Make sure that this frame resource isn't still in use by the GPU.
    // If it is, wait for it to complete.
    if (m_pCurrentFrameResource->m_fenceValue > lastCompletedFence)
    {
        HANDLE eventHandle = CreateEventEx(nullptr, FALSE, FALSE, EVENT_ALL_ACCESS);
        if (eventHandle == nullptr)
        {
            ThrowIfFailed(HRESULT_FROM_WIN32(GetLastError()));
        }
        ThrowIfFailed(m_fence->SetEventOnCompletion(m_pCurrentFrameResource->m_fenceValue, eventHandle));
        WaitForSingleObject(eventHandle, INFINITE);
        CloseHandle(eventHandle); // one-shot wait; close the handle to avoid leaking it every frame
    }

    m_cpuTimer.Tick(NULL);
    float frameTime = static_cast<float>(m_timer.GetElapsedSeconds());
    float frameChange = 2.0f * frameTime;

    if (m_keyboardInput.leftArrowPressed)
        m_camera.RotateYaw(-frameChange);
    if (m_keyboardInput.rightArrowPressed)
        m_camera.RotateYaw(frameChange);
    if (m_keyboardInput.upArrowPressed)
        m_camera.RotatePitch(frameChange);
    if (m_keyboardInput.downArrowPressed)
        m_camera.RotatePitch(-frameChange);

    if (m_keyboardInput.animate)
    {
        for (int i = 0; i < NumLights; i++)
        {
            float direction = frameChange * pow(-1.0f, i);
            XMStoreFloat4(&m_lights[i].position, XMVector4Transform(XMLoadFloat4(&m_lights[i].position), XMMatrixRotationY(direction)));

            XMVECTOR eye = XMLoadFloat4(&m_lights[i].position);
            XMVECTOR at = { 0.0f, 8.0f, 0.0f };
            XMStoreFloat4(&m_lights[i].direction, XMVector3Normalize(XMVectorSubtract(at, eye)));
            XMVECTOR up = { 0.0f, 1.0f, 0.0f };
            m_lightCameras[i].Set(eye, at, up);

            m_lightCameras[i].Get3DViewProjMatrices(&m_lights[i].view, &m_lights[i].projection, 90.0f, static_cast<float>(m_width), static_cast<float>(m_height));
        }
    }

    m_pCurrentFrameResource->WriteConstantBuffers(&m_viewport, &m_camera, m_lightCameras, m_lights);
}
SyncImpl::SyncImpl()
{
#ifdef PX_WINMODERN
    getSync(this) = CreateEventEx(NULL, NULL, CREATE_EVENT_MANUAL_RESET, EVENT_ALL_ACCESS);
#else
    getSync(this) = CreateEvent(0, true, false, 0);
#endif
}
inline QWaitConditionEvent() : priority(0), wokenUp(false)
{
#ifndef Q_OS_WINRT
    event = CreateEvent(NULL, TRUE, FALSE, NULL);
#else
    event = CreateEventEx(NULL, NULL, CREATE_EVENT_MANUAL_RESET, EVENT_ALL_ACCESS);
#endif
}
//
// We can "Chat" if there's more than one capture device.
//
bool CWasapiChat::Initialize(bool UseInputDevice)
{
    IMMDeviceEnumerator *deviceEnumerator;
    HRESULT hr = CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&deviceEnumerator));
    if (FAILED(hr))
    {
        MessageBox(_AppWindow, L"Unable to instantiate device enumerator", L"WASAPI Transport Initialize Failure", MB_OK);
        return false;
    }

    if (UseInputDevice)
    {
        _Flow = eCapture;
    }
    else
    {
        _Flow = eRender;
    }

    hr = deviceEnumerator->GetDefaultAudioEndpoint(_Flow, eCommunications, &_ChatEndpoint);
    deviceEnumerator->Release();
    if (FAILED(hr))
    {
        MessageBox(_AppWindow, L"Unable to retrieve default endpoint", L"WASAPI Transport Initialize Failure", MB_OK);
        return false;
    }

    //
    // Create our shutdown event - we want an auto reset event that starts in the not-signaled state.
    //
    _ShutdownEvent = CreateEventEx(NULL, NULL, 0, EVENT_MODIFY_STATE | SYNCHRONIZE);
    if (_ShutdownEvent == NULL)
    {
        MessageBox(_AppWindow, L"Unable to create shutdown event.", L"WASAPI Transport Initialize Failure", MB_OK);
        return false;
    }

    _AudioSamplesReadyEvent = CreateEventEx(NULL, NULL, 0, EVENT_MODIFY_STATE | SYNCHRONIZE);
    if (_AudioSamplesReadyEvent == NULL) // check the event just created, not _ShutdownEvent again
    {
        MessageBox(_AppWindow, L"Unable to create samples ready event.", L"WASAPI Transport Initialize Failure", MB_OK);
        return false;
    }

    return true;
}
template <typename CODE>
HRESULT RunOnUIThread(CODE &&code, const ComPtr<ICoreDispatcher> &dispatcher)
{
    ComPtr<IAsyncAction> asyncAction;
    HRESULT result = S_OK;

    boolean hasThreadAccess;
    result = dispatcher->get_HasThreadAccess(&hasThreadAccess);
    if (FAILED(result))
    {
        return result;
    }

    if (hasThreadAccess)
    {
        return code();
    }
    else
    {
        Event waitEvent(CreateEventEx(nullptr, nullptr, CREATE_EVENT_MANUAL_RESET, EVENT_ALL_ACCESS));
        if (!waitEvent.IsValid())
        {
            return E_FAIL;
        }

        HRESULT codeResult = E_FAIL;
        auto handler = Callback<AddFtmBase<IDispatchedHandler>::Type>([&codeResult, &code, &waitEvent] {
            codeResult = code();
            SetEvent(waitEvent.Get());
            return S_OK;
        });

        result = dispatcher->RunAsync(CoreDispatcherPriority_Normal, handler.Get(), asyncAction.GetAddressOf());
        if (FAILED(result))
        {
            return result;
        }

        auto waitResult = WaitForSingleObjectEx(waitEvent.Get(), 10 * 1000, true);
        if (waitResult != WAIT_OBJECT_0)
        {
            // Wait 10 seconds before giving up. At this point, the application is in an
            // unrecoverable state (probably deadlocked). We therefore terminate the application
            // entirely. This also prevents stack corruption if the async operation is eventually
            // run.
            ERR() << "Timeout waiting for async action on UI thread. The UI thread might be blocked.";
            std::terminate();
            return E_FAIL;
        }
        return codeResult;
    }
}
static void Sleep(DWORD timeout)
{
    static HANDLE mutex = 0;
    if (!mutex)
    {
        mutex = CreateEventEx(0, 0, 0, EVENT_ALL_ACCESS);
    }
    WaitForSingleObjectEx(mutex, timeout, FALSE);
}
QT_BEGIN_NAMESPACE

QMutexPrivate::QMutexPrivate()
{
#ifndef Q_OS_WINRT
    event = CreateEvent(0, FALSE, FALSE, 0);
#else
    event = CreateEventEx(0, NULL, 0, EVENT_ALL_ACCESS);
#endif
    if (!event)
        qWarning("QMutexData::QMutexData: Cannot create event");
}
void ThreadAPI_Sleep(unsigned int milliseconds)
{
    HANDLE handle = CreateEventEx(NULL, NULL, 0, EVENT_ALL_ACCESS);
    if (handle != NULL)
    {
        /*
         * Have to use at least 1 to cause a thread yield in case 0 is passed
         */
        (void)WaitForSingleObjectEx(handle, milliseconds == 0 ? 1 : milliseconds, FALSE);
        (void)CloseHandle(handle);
    }
}
SimpleConsole::SimpleConsole()
    : _apEvent(CreateEventEx(nullptr, nullptr, 0, WRITE_OWNER | EVENT_ALL_ACCESS))
{
    HRESULT hr = _apEvent.IsValid() ? S_OK : HRESULT_FROM_WIN32(GetLastError());
    if (FAILED(hr))
    {
        std::wcout << "Failed to create AP event: " << hr << std::endl;
        throw WlanHostedNetworkException("Create event failed", hr);
    }

    _hostedNetwork.RegisterListener(this);
    _hostedNetwork.RegisterPrompt(this);
}
//----------------------------------------------------------------------------------------
amf_handle AMF_CDECL_CALL amf_create_event(amf_bool bInitiallyOwned, amf_bool bManualReset, const wchar_t* pName)
{
#if defined(METRO_APP)
    DWORD flags = ((bManualReset) ? CREATE_EVENT_MANUAL_RESET : 0) | ((bInitiallyOwned) ? CREATE_EVENT_INITIAL_SET : 0);
    return CreateEventEx(NULL, pName, flags, STANDARD_RIGHTS_ALL | EVENT_MODIFY_STATE);
#else
    return CreateEventW(NULL, bManualReset == true, bInitiallyOwned == true, pName);
#endif
}
//
// Initialize the capturer.
//
bool CWASAPICapture::Initialize(UINT32 EngineLatency)
{
    //
    // Create our shutdown event - we want auto reset events that start in the not-signaled state.
    //
    _ShutdownEvent = CreateEventEx(NULL, NULL, 0, EVENT_MODIFY_STATE | SYNCHRONIZE);
    PersistentAssert(_ShutdownEvent != NULL, "CreateEventEx failed");

    //
    // Create our stream switch event - we want auto reset events that start in the not-signaled state.
    // Note that we create this event even if we're not going to stream switch - that's because the event is used
    // in the main loop of the capturer and thus it has to be set.
    //
    _StreamSwitchEvent = CreateEventEx(NULL, NULL, 0, EVENT_MODIFY_STATE | SYNCHRONIZE);
    PersistentAssert(_StreamSwitchEvent != NULL, "CreateEventEx failed");

    //
    // Now activate an IAudioClient object on our preferred endpoint and retrieve the mix format for that endpoint.
    //
    HRESULT hr = _Endpoint->Activate(__uuidof(IAudioClient), CLSCTX_INPROC_SERVER, NULL, reinterpret_cast<void **>(&_AudioClient));
    PersistentAssert(SUCCEEDED(hr), "_Endpoint->Activate failed");

    hr = CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&_DeviceEnumerator));
    PersistentAssert(SUCCEEDED(hr), "CoCreateInstance failed");

    //
    // Load the MixFormat. This may differ depending on the shared mode used.
    //
    LoadFormat();

    //
    // Remember our configured latency in case we'll need it for a stream switch later.
    //
    _EngineLatencyInMS = EngineLatency;

    InitializeAudioEngine();
    return true;
}
//
// Initialize the capturer.
//
bool CWASAPICapture::Initialize(UINT32 EngineLatency)
{
    //
    // Create our shutdown event - we want auto reset events that start in the not-signaled state.
    //
    _ShutdownEvent = CreateEventEx(NULL, NULL, 0, EVENT_MODIFY_STATE | SYNCHRONIZE);
    if (_ShutdownEvent == NULL)
    {
        printf_s("Unable to create shutdown event: %d.\n", GetLastError());
        return false;
    }

    //
    // Now activate an IAudioClient object on our preferred endpoint and retrieve the mix format for that endpoint.
    //
    HRESULT hr = _Endpoint->Activate(__uuidof(IAudioClient), CLSCTX_INPROC_SERVER, NULL, reinterpret_cast<void **>(&_AudioClient));
    if (FAILED(hr))
    {
        printf_s("Unable to activate audio client: %x.\n", hr);
        return false;
    }

    hr = CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&_DeviceEnumerator));
    if (FAILED(hr))
    {
        printf_s("Unable to instantiate device enumerator: %x\n", hr);
        return false;
    }

    //
    // Load the MixFormat. This may differ depending on the shared mode used.
    //
    if (!LoadFormat())
    {
        printf_s("Failed to load the mix format\n");
        return false;
    }

    //
    // Remember our configured latency
    //
    _EngineLatencyInMS = EngineLatency;

    if (!InitializeAudioEngine())
    {
        return false;
    }

    return true;
}
CTcpSocket::CTcpSocket()
    : TPTCPServer(this)
    , TPTCPClient(this)
{
    m_pPacketBuf = NULL;
    m_nBufSize = 0;
    m_nWritePos = 0;
    m_nReadPos = 0;
    m_pDisConnect = NULL;
    m_pReconnect = NULL;
    m_pNormalPacket = NULL;
    m_pRecvPakcet = NULL;
    m_pUserData = NULL;
    m_pListenSockFunc = NULL;
    m_pListenUserData = NULL;
    m_pListenSocket = NULL;
    // Note: this CreateEventEx takes (handle, manual-reset, initial-state) and so is
    // presumably a project-local wrapper, not the four-argument Win32 API of the same name.
    CreateEventEx(m_hRecEvent, TRUE, FALSE);
#ifdef NETSDK_VERSION_BOGUSSSL
    CreateEventEx(m_hSpecialEvent, FALSE, FALSE);
    m_nSSL = 0;
#endif
}
static ALCenum xaudio2_open_playback(ALCdevice *device, const ALCchar *deviceName)
{
    HRESULT hr;
    XAudio2Data *data;

    if (!deviceName)
    {
        deviceName = xaudio2_device;
    }
    else if (strcmp(deviceName, xaudio2_device) != 0)
    {
        return ALC_INVALID_VALUE;
    }

    data = (XAudio2Data*)calloc(1, sizeof(*data));
    if (data == NULL)
        return ALC_OUT_OF_MEMORY;
    device->ExtraData = data;

    hr = S_OK;
    data->MsgEvent = CreateEventEx(NULL, NULL, 0, EVENT_ACCESS_MASK);
    if (data->MsgEvent == NULL)
        hr = E_FAIL;

    if (SUCCEEDED(hr))
    {
        ThreadRequest req = { data->MsgEvent, 0 };
        hr = E_FAIL;
        if (g_MsgQueue->PostMsg(TM_USER_OpenDevice, &req, device))
            hr = WaitForResponseHR(&req);
    }

    if (FAILED(hr))
    {
        if (data->MsgEvent != NULL)
            CloseHandle(data->MsgEvent);
        data->MsgEvent = NULL;

        free(data);
        device->ExtraData = NULL;
        return ALC_OUT_OF_MEMORY;
    }

    device->szDeviceName = alc_strdup(deviceName);
    return ALC_NO_ERROR;
}
void TextureStore::init()
{
    // Create an empty root signature.
    {
        CD3DX12_ROOT_SIGNATURE_DESC rootSignatureDesc;
        rootSignatureDesc.Init(0, nullptr, 0, nullptr, D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT);

        ComPtr<ID3DBlob> signature;
        ComPtr<ID3DBlob> error;
        ThrowIfFailed(D3D12SerializeRootSignature(&rootSignatureDesc, D3D_ROOT_SIGNATURE_VERSION_1, &signature, &error));
        ThrowIfFailed(xapp().device->CreateRootSignature(0, signature->GetBufferPointer(), signature->GetBufferSize(), IID_PPV_ARGS(&rootSignature)));
    }

    D3D12_INPUT_ELEMENT_DESC inputElementDescs[] =
    {
        { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
        { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 12, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 }
    };

    // Describe and create the graphics pipeline state object (PSO).
    D3D12_GRAPHICS_PIPELINE_STATE_DESC psoDesc = {};
    psoDesc.InputLayout = { inputElementDescs, _countof(inputElementDescs) };
    psoDesc.pRootSignature = rootSignature.Get();
    psoDesc.RasterizerState = CD3DX12_RASTERIZER_DESC(D3D12_DEFAULT);
    psoDesc.BlendState = CD3DX12_BLEND_DESC(D3D12_DEFAULT);
    psoDesc.DepthStencilState.DepthEnable = FALSE;
    psoDesc.DepthStencilState.StencilEnable = FALSE;
    psoDesc.SampleMask = UINT_MAX;
    psoDesc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE;
    psoDesc.NumRenderTargets = 1;
    psoDesc.RTVFormats[0] = DXGI_FORMAT_R8G8B8A8_UNORM;
    psoDesc.SampleDesc.Count = 1;

    // Pull in the precompiled vertex shader byte code.
#include "CompiledShaders/PostVS.h"
    psoDesc.VS = { binShader_PostVS, sizeof(binShader_PostVS) };
    ThrowIfFailed(xapp().device->CreateGraphicsPipelineState(&psoDesc, IID_PPV_ARGS(&pipelineState)));

    ThrowIfFailed(xapp().device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT, IID_PPV_ARGS(&commandAllocator)));
    ThrowIfFailed(xapp().device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT, commandAllocator.Get(), pipelineState.Get(), IID_PPV_ARGS(&commandList)));

    ThrowIfFailed(xapp().device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&updateFrameData.fence)));
    updateFrameData.fence->SetName(L"fence_texture_update");
    updateFrameData.fenceValue = 0;
    updateFrameData.fenceEvent = CreateEventEx(nullptr, FALSE, FALSE, EVENT_ALL_ACCESS);
    if (updateFrameData.fenceEvent == nullptr) {
        ThrowIfFailed(HRESULT_FROM_WIN32(GetLastError()));
    }
}
int PltCreateEvent(PLT_EVENT* event)
{
#if defined(LC_WINDOWS)
    *event = CreateEventEx(NULL, NULL, CREATE_EVENT_MANUAL_RESET, EVENT_ALL_ACCESS);
    if (!*event) {
        return -1;
    }
    return 0;
#else
    pthread_mutex_init(&event->mutex, NULL);
    pthread_cond_init(&event->cond, NULL);
    event->signalled = 0;
    return 0;
#endif
}
HRESULT DXManager::CreateFence()
{
    HRESULT hr = m_Device->CreateFence(0, D3D12_FENCE_FLAG_NONE, __uuidof(ID3D12Fence), (void**)&m_Fence);
    if (FAILED(hr))
        return hr;

    m_FenceEvent = CreateEventEx(NULL, FALSE, FALSE, EVENT_ALL_ACCESS);
    if (m_FenceEvent == NULL)
        return HRESULT_FROM_WIN32(GetLastError()); // report a real failure; S_FALSE is a success code

    m_FenceValue = 1;
    return hr;
}
//
// Event implementation
//
_PPLXIMP event_impl::event_impl()
{
    static_assert(sizeof(HANDLE) <= sizeof(_M_impl), "HANDLE version mismatch");
#ifndef __cplusplus_winrt
    _M_impl = CreateEvent(NULL, true, false, NULL);
#else
    _M_impl = CreateEventEx(NULL, NULL, CREATE_EVENT_MANUAL_RESET, EVENT_ALL_ACCESS);
#endif // !__cplusplus_winrt
    if (_M_impl != NULL)
    {
        ResetEvent(static_cast<HANDLE>(_M_impl));
    }
}
_agpu_fence *_agpu_fence::create(agpu_device *device)
{
    std::unique_ptr<agpu_fence> fence(new agpu_fence());
    fence->device = device;

    // Create transfer synchronization fence.
    if (FAILED(device->d3dDevice->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence->fence))))
        return nullptr; // was `return false`, which is not a valid pointer value

    // Create an event handle to use for frame synchronization.
    fence->event = CreateEventEx(nullptr, FALSE, FALSE, EVENT_ALL_ACCESS);
    if (fence->event == nullptr)
        return nullptr;

    return fence.release();
}