nsresult
PluginPRLibrary::NP_Initialize(NPNetscapeFuncs* bFuncs, NPPluginFuncs* pFuncs,
                               NPError* error)
{
  JNIEnv* env = GetJNIForThread();
  if (!env)
    return NS_ERROR_FAILURE;

  if (mNP_Initialize) {
    *error = mNP_Initialize(bFuncs, pFuncs, env);
  } else {
    NP_InitializeFunc pfNP_Initialize = (NP_InitializeFunc)
      PR_FindFunctionSymbol(mLibrary, "NP_Initialize");
    if (!pfNP_Initialize)
      return NS_ERROR_FAILURE;
    *error = pfNP_Initialize(bFuncs, pFuncs, env);
  }

  // Save pointers to functions that get called through PluginLibrary itself.
  mNPP_New = pFuncs->newp;
  mNPP_GetValue = pFuncs->getvalue;
  mNPP_ClearSiteData = pFuncs->clearsitedata;
  mNPP_GetSitesWithData = pFuncs->getsiteswithdata;
  return NS_OK;
}
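The fallback branch above resolves the plugin entry point at runtime when no cached function pointer exists. A minimal sketch of the same NSPR symbol-lookup pattern, assuming a `PRLibrary*` already opened with PR_LoadLibrary; the `EntryPointFunc` typedef and symbol name are hypothetical:

// Minimal sketch: resolving an optional entry point with NSPR.
// Assumes `lib` was opened earlier with PR_LoadLibrary(); the
// EntryPointFunc typedef and "OptionalEntryPoint" name are stand-ins.
#include "prlink.h"

typedef int (*EntryPointFunc)(void);

static int
CallOptionalEntryPoint(PRLibrary* lib)
{
  EntryPointFunc fn =
    (EntryPointFunc) PR_FindFunctionSymbol(lib, "OptionalEntryPoint");
  if (!fn) {
    return -1;  // symbol not exported; caller decides how to recover
  }
  return fn();
}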
jclass
anp_system_loadJavaClass(NPP instance, const char* className)
{
  LOG("%s", __PRETTY_FUNCTION__);

  JNIEnv* env = GetJNIForThread();
  if (!env)
    return nullptr;

  jclass cls = env->FindClass("org/mozilla/gecko/GeckoAppShell");
  jmethodID method =
    env->GetStaticMethodID(cls, "loadPluginClass",
                           "(Ljava/lang/String;Ljava/lang/String;)Ljava/lang/Class;");

  // Pass the library name and class name; both must be wrapped as Java strings.
  nsNPAPIPluginInstance* pinst =
    static_cast<nsNPAPIPluginInstance*>(instance->ndata);
  mozilla::PluginPRLibrary* lib =
    static_cast<mozilla::PluginPRLibrary*>(pinst->GetPlugin()->GetLibrary());

  nsCString libName;
  lib->GetLibraryPath(libName);

  jstring jclassName = env->NewStringUTF(className);
  jstring jlibName = env->NewStringUTF(libName.get());
  jobject obj = env->CallStaticObjectMethod(cls, method, jclassName, jlibName);
  env->DeleteLocalRef(jlibName);
  env->DeleteLocalRef(jclassName);
  env->DeleteLocalRef(cls);
  return reinterpret_cast<jclass>(obj);
}
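The manual DeleteLocalRef calls above are easy to leak on early-return paths. A minimal sketch of an RAII guard that releases a JNI local reference when it goes out of scope; the `ScopedLocalRef` name is hypothetical, not a Gecko API (Gecko's own AutoLocalJNIFrame, used elsewhere in this section, solves the same problem at frame granularity):

// Minimal sketch: RAII wrapper for a JNI local reference.
// The ScopedLocalRef name is a hypothetical illustration.
#include <jni.h>

template <typename T>
class ScopedLocalRef
{
public:
  ScopedLocalRef(JNIEnv* aEnv, T aRef) : mEnv(aEnv), mRef(aRef) {}
  ~ScopedLocalRef()
  {
    if (mRef) {
      mEnv->DeleteLocalRef(mRef);
    }
  }
  T get() const { return mRef; }

private:
  JNIEnv* mEnv;
  T mRef;

  ScopedLocalRef(const ScopedLocalRef&);             // non-copyable
  ScopedLocalRef& operator=(const ScopedLocalRef&);  // non-assignable
};

// Usage: the reference is released even if the function returns early.
// ScopedLocalRef<jstring> jclassName(env, env->NewStringUTF(className));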
void
anp_audio_start(ANPAudioTrack* s)
{
  if (s == NULL || s->output_unit == NULL) {
    return;
  }

  if (s->keepGoing) {
    // We are already playing. Ignore.
    return;
  }

  JNIEnv* jenv = GetJNIForThread();
  if (!jenv)
    return;

  mozilla::AutoLocalJNIFrame autoFrame(jenv, 0);
  jenv->CallVoidMethod(s->output_unit, at.play);
  if (autoFrame.CheckForException()) {
    jenv->DeleteGlobalRef(s->at_class);
    free(s);
    return;
  }

  s->isStopped = false;
  s->keepGoing = true;

  // AudioRunnable now owns the ANPAudioTrack.
  nsRefPtr<AudioRunnable> runnable = new AudioRunnable(s);
  nsCOMPtr<nsIThread> thread;
  NS_NewThread(getter_AddRefs(thread), runnable);
}
int
sa_stream_open(sa_stream_t *s) {
  if (s == NULL) {
    return SA_ERROR_NO_INIT;
  }
  if (s->output_unit != NULL) {
    return SA_ERROR_INVALID;
  }

  JNIEnv *jenv = GetJNIForThread();
  if (!jenv)
    return SA_ERROR_NO_DEVICE;

  if ((*jenv)->PushLocalFrame(jenv, 4)) {
    return SA_ERROR_OOM;
  }

  s->at_class = init_jni_bindings(jenv);

  int32_t chanConfig = s->channels == 1 ?
    CHANNEL_OUT_MONO : CHANNEL_OUT_STEREO;

  jobject obj = (*jenv)->NewObject(jenv, s->at_class, at.constructor,
                                   STREAM_MUSIC, s->rate, chanConfig,
                                   ENCODING_PCM_16BIT, s->bufferSize,
                                   MODE_STREAM);

  jthrowable exception = (*jenv)->ExceptionOccurred(jenv);
  if (exception) {
    (*jenv)->ExceptionDescribe(jenv);
    (*jenv)->ExceptionClear(jenv);
    (*jenv)->DeleteGlobalRef(jenv, s->at_class);
    (*jenv)->PopLocalFrame(jenv, NULL);
    return SA_ERROR_INVALID;
  }

  if (!obj) {
    (*jenv)->DeleteGlobalRef(jenv, s->at_class);
    (*jenv)->PopLocalFrame(jenv, NULL);
    return SA_ERROR_OOM;
  }

  s->output_unit = (*jenv)->NewGlobalRef(jenv, obj);

  (*jenv)->PopLocalFrame(jenv, NULL);

  ALOG("%x - New stream %d %d", s, s->rate, s->channels);
  return SA_SUCCESS;
}
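PushLocalFrame/PopLocalFrame bound how many local references a section may create and free them all in one call, which is why every exit path above pops the frame. A minimal sketch of the pairing, assuming a valid `env` for the current thread (written with the C++ JNI calling convention, though the sydney_audio code above uses the C convention):

// Minimal sketch: bounding local references with a JNI frame.
// Assumes `env` is a valid JNIEnv* for the current thread.
#include <jni.h>

jobject MakeSomething(JNIEnv* env, jclass cls, jmethodID ctor)
{
  // Reserve capacity for the locals created below; a nonzero
  // return means the frame could not be allocated.
  if (env->PushLocalFrame(4) != 0) {
    return nullptr;  // out of memory
  }

  jobject local = env->NewObject(cls, ctor);

  // PopLocalFrame frees every local created since the push; its
  // argument lets one reference survive into the outer frame.
  return env->PopLocalFrame(local);
}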
void
anp_audio_pause(ANPAudioTrack* s)
{
  if (s == nullptr || s->output_unit == nullptr) {
    return;
  }

  JNIEnv* jenv = GetJNIForThread();
  mozilla::AutoLocalJNIFrame autoFrame(jenv, 0);
  jenv->CallVoidMethod(s->output_unit, at.pause);
}
void
anp_audio_stop(ANPAudioTrack* s)
{
  if (s == nullptr || s->output_unit == nullptr) {
    return;
  }

  s->isStopped = true;
  JNIEnv* jenv = GetJNIForThread();
  mozilla::AutoLocalJNIFrame autoFrame(jenv, 0);
  jenv->CallVoidMethod(s->output_unit, at.stop);
}
int
sa_stream_set_volume_abs(sa_stream_t *s, float vol) {
  if (s == NULL || s->output_unit == NULL) {
    return SA_ERROR_NO_INIT;
  }

  JNIEnv *jenv = GetJNIForThread();
  (*jenv)->CallIntMethod(jenv, s->output_unit, at.setvol,
                         (jfloat)vol, (jfloat)vol);

  return SA_SUCCESS;
}
AudioDataDecoder(const AudioInfo& aConfig, MediaFormat::Param aFormat,
                 MediaDataDecoderCallback* aCallback)
  : MediaCodecDataDecoder(MediaData::Type::AUDIO_DATA, aConfig.mMimeType,
                          aFormat, aCallback)
{
  JNIEnv* env = GetJNIForThread();

  jni::Object::LocalRef buffer(env);
  NS_ENSURE_SUCCESS_VOID(
      aFormat->GetByteBuffer(NS_LITERAL_STRING("csd-0"), &buffer));

  if (!buffer && aConfig.mCodecSpecificConfig->Length() >= 2) {
    buffer = jni::Object::LocalRef::Adopt(
        env, env->NewDirectByteBuffer(aConfig.mCodecSpecificConfig->Elements(),
                                      aConfig.mCodecSpecificConfig->Length()));
    NS_ENSURE_SUCCESS_VOID(
        aFormat->SetByteBuffer(NS_LITERAL_STRING("csd-0"), buffer));
  }
}
int
sa_stream_get_position(sa_stream_t *s, sa_position_t position, int64_t *pos) {
  if (s == NULL || s->output_unit == NULL) {
    return SA_ERROR_NO_INIT;
  }

  ALOG("%x - get position", s);

  JNIEnv *jenv = GetJNIForThread();
  *pos = (*jenv)->CallIntMethod(jenv, s->output_unit, at.getpos);

  /* Android returns the number of frames, so:
     position = frames * (PCM_16_BIT == 2 bytes) * channels */
  *pos *= s->channels * sizeof(int16_t);
  return SA_SUCCESS;
}
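For example, a stereo 16-bit stream that reports 44100 frames played has advanced 44100 * 2 channels * 2 bytes = 176400 bytes. A minimal sketch of the same conversion as a standalone helper (the function name is illustrative):

// Minimal sketch: frames-to-bytes conversion for 16-bit PCM.
// E.g. 44100 frames of stereo audio -> 44100 * 2 * 2 = 176400 bytes.
#include <cstdint>

int64_t FramesToBytesPcm16(int64_t frames, int channels)
{
  return frames * channels * static_cast<int64_t>(sizeof(int16_t));
}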
int
sa_stream_destroy(sa_stream_t *s) {
  if (s == NULL) {
    return SA_ERROR_NO_INIT;
  }

  JNIEnv *jenv = GetJNIForThread();
  if (!jenv)
    return SA_SUCCESS;

  (*jenv)->DeleteGlobalRef(jenv, s->output_unit);
  (*jenv)->DeleteGlobalRef(jenv, s->at_class);

  /* Log before freeing so we don't touch s after free(). */
  ALOG("%x - Stream destroyed", s);
  free(s);
  return SA_SUCCESS;
}
int
sa_stream_resume(sa_stream_t *s) {
  if (s == NULL || s->output_unit == NULL) {
    return SA_ERROR_NO_INIT;
  }

  ALOG("%x - resume", s);

  JNIEnv *jenv = GetJNIForThread();
  s->isPaused = 0;

  /* Update stats */
  struct timespec current_time;
  clock_gettime(CLOCK_REALTIME, &current_time);
  int64_t ticker = current_time.tv_sec * 1000 + current_time.tv_nsec / 1000000;
  s->lastStartTime = ticker;

  (*jenv)->CallVoidMethod(jenv, s->output_unit, at.play);
  return SA_SUCCESS;
}
int
sa_stream_pause(sa_stream_t *s) {
  if (s == NULL || s->output_unit == NULL) {
    return SA_ERROR_NO_INIT;
  }

  JNIEnv *jenv = GetJNIForThread();
  s->isPaused = 1;

  /* Update stats */
  if (s->lastStartTime != 0) {
    /* A nonzero lastStartTime means playback has started. */
    struct timespec current_time;
    clock_gettime(CLOCK_REALTIME, &current_time);
    int64_t ticker = current_time.tv_sec * 1000 + current_time.tv_nsec / 1000000;
    s->timePlaying += ticker - s->lastStartTime;
  }
  ALOG("%x - Pause total time playing: %lld total written: %lld",
       s, s->timePlaying, s->amountWritten);

  (*jenv)->CallVoidMethod(jenv, s->output_unit, at.pause);
  return SA_SUCCESS;
}
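Both pause and resume reduce a timespec to a millisecond tick (seconds * 1000 plus nanoseconds / 1000000). A minimal sketch of that computation as a helper; note the source uses CLOCK_REALTIME, while CLOCK_MONOTONIC (substituted here, my choice rather than the source's) is immune to wall-clock adjustments:

// Minimal sketch: current time in milliseconds via clock_gettime.
// The source above uses CLOCK_REALTIME; CLOCK_MONOTONIC (used here)
// avoids jumps when the wall clock is adjusted.
#include <cstdint>
#include <ctime>

int64_t NowMilliseconds()
{
  struct timespec ts;
  clock_gettime(CLOCK_MONOTONIC, &ts);
  return static_cast<int64_t>(ts.tv_sec) * 1000 + ts.tv_nsec / 1000000;
}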
JNIEnv*
jsjni_GetJNIForThread()
{
  return GetJNIForThread();
}
ANPAudioTrack*
anp_audio_newTrack(uint32_t sampleRate,    // sampling rate in Hz
                   ANPSampleFormat format,
                   int channelCount,       // MONO=1, STEREO=2
                   ANPAudioCallbackProc proc,
                   void* user)
{
  ANPAudioTrack* s = (ANPAudioTrack*) malloc(sizeof(ANPAudioTrack));
  if (s == NULL) {
    return NULL;
  }

  JNIEnv* jenv = GetJNIForThread();
  if (!jenv) {
    free(s);  // don't leak the track if no JNIEnv is available
    return NULL;
  }

  s->at_class = init_jni_bindings(jenv);
  s->rate = sampleRate;
  s->channels = channelCount;
  s->bufferSize = s->rate * s->channels;
  s->isStopped = true;
  s->keepGoing = false;
  s->user = user;
  s->proc = proc;
  s->format = format;

  int jformat;
  switch (format) {
  case kPCM16Bit_ANPSampleFormat:
    jformat = ENCODING_PCM_16BIT;
    break;
  case kPCM8Bit_ANPSampleFormat:
    jformat = ENCODING_PCM_8BIT;
    break;
  default:
    LOG("Unknown audio format. Defaulting to 16-bit.");
    jformat = ENCODING_PCM_16BIT;
    break;
  }

  int jChannels;
  switch (channelCount) {
  case 1:
    jChannels = CHANNEL_OUT_MONO;
    break;
  case 2:
    jChannels = CHANNEL_OUT_STEREO;
    break;
  default:
    LOG("Unknown channel count. Defaulting to mono.");
    jChannels = CHANNEL_OUT_MONO;
    break;
  }

  mozilla::AutoLocalJNIFrame autoFrame(jenv);

  jobject obj = jenv->NewObject(s->at_class, at.constructor,
                                STREAM_MUSIC, s->rate, jChannels, jformat,
                                s->bufferSize, MODE_STREAM);

  if (autoFrame.CheckForException() || obj == NULL) {
    jenv->DeleteGlobalRef(s->at_class);
    free(s);
    return NULL;
  }

  jint state = jenv->CallIntMethod(obj, at.getstate);

  if (autoFrame.CheckForException() || state == STATE_UNINITIALIZED) {
    jenv->DeleteGlobalRef(s->at_class);
    free(s);
    return NULL;
  }

  s->output_unit = jenv->NewGlobalRef(obj);
  return s;
}
NS_IMETHODIMP
AudioRunnable::Run()
{
  JNIEnv* jenv = GetJNIForThread();
  if (!jenv)
    return NS_ERROR_FAILURE;

  mozilla::AutoLocalJNIFrame autoFrame(jenv, 2);

  jbyteArray bytearray = jenv->NewByteArray(mTrack->bufferSize);
  if (!bytearray) {
    LOG("AudioRunnable::Run. Could not create byte array");
    return NS_ERROR_FAILURE;
  }

  jbyte* byte = jenv->GetByteArrayElements(bytearray, NULL);
  if (!byte) {
    LOG("AudioRunnable::Run. Could not get byte array elements");
    return NS_ERROR_FAILURE;
  }

  ANPAudioBuffer buffer;
  buffer.channelCount = mTrack->channels;
  buffer.format = mTrack->format;
  buffer.bufferData = (void*) byte;

  while (mTrack->keepGoing) {
    // Reset the buffer size.
    buffer.size = mTrack->bufferSize;

    // Get data from the plugin.
    mTrack->proc(kMoreData_ANPAudioEvent, mTrack->user, &buffer);

    if (buffer.size == 0) {
      LOG("%p - kMoreData_ANPAudioEvent", mTrack);
      continue;
    }

    size_t wroteSoFar = 0;
    jint retval;
    do {
      retval = jenv->CallIntMethod(mTrack->output_unit,
                                   at.write,
                                   bytearray,
                                   wroteSoFar,
                                   buffer.size - wroteSoFar);
      if (retval < 0) {
        LOG("%p - Write failed %d", mTrack, retval);
        break;
      }
      wroteSoFar += retval;
    } while (wroteSoFar < buffer.size);
  }

  jenv->CallVoidMethod(mTrack->output_unit, at.release);
  jenv->DeleteGlobalRef(mTrack->output_unit);
  jenv->DeleteGlobalRef(mTrack->at_class);
  free(mTrack);
  jenv->ReleaseByteArrayElements(bytearray, byte, 0);
  return NS_OK;
}
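AudioTrack.write may consume fewer bytes than requested, so the write loops here and in sa_stream_write below retry from the current offset until the buffer drains or an error comes back. A minimal sketch of the pattern, with a generic write callback standing in for the JNI call (the WriteFn type and names are hypothetical):

// Minimal sketch: write-until-drained loop over a partial-write API.
// WriteFn returns bytes accepted, or a negative error code; the name
// and signature are hypothetical stand-ins for AudioTrack.write.
#include <cstddef>

typedef int (*WriteFn)(const char* data, size_t len, void* ctx);

int WriteAll(WriteFn write, const char* data, size_t len, void* ctx)
{
  size_t wroteSoFar = 0;
  while (wroteSoFar < len) {
    int n = write(data + wroteSoFar, len - wroteSoFar, ctx);
    if (n < 0) {
      return n;  // propagate the error; the caller decides how to recover
    }
    wroteSoFar += static_cast<size_t>(n);
  }
  return 0;
}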
int
sa_stream_write(sa_stream_t *s, const void *data, size_t nbytes) {
  if (s == NULL || s->output_unit == NULL) {
    return SA_ERROR_NO_INIT;
  }
  if (nbytes == 0) {
    return SA_SUCCESS;
  }

  JNIEnv *jenv = GetJNIForThread();
  if ((*jenv)->PushLocalFrame(jenv, 2)) {
    return SA_ERROR_OOM;
  }

  jbyteArray bytearray = (*jenv)->NewByteArray(jenv, nbytes);
  if (!bytearray) {
    (*jenv)->ExceptionClear(jenv);
    (*jenv)->PopLocalFrame(jenv, NULL);
    return SA_ERROR_OOM;
  }

  jbyte *byte = (*jenv)->GetByteArrayElements(jenv, bytearray, NULL);
  if (!byte) {
    (*jenv)->PopLocalFrame(jenv, NULL);
    return SA_ERROR_OOM;
  }

  memcpy(byte, data, nbytes);

  size_t wroteSoFar = 0;
  jint retval;
  do {
    retval = (*jenv)->CallIntMethod(jenv,
                                    s->output_unit,
                                    at.write,
                                    bytearray,
                                    wroteSoFar,
                                    nbytes - wroteSoFar);
    if (retval < 0) {
      ALOG("%x - Write failed %d", s, retval);
      break;
    }

    wroteSoFar += retval;

    if (wroteSoFar != nbytes) {
      /* Android doesn't start playing until we explicitly call play. */
      if (!s->isPaused)
        sa_stream_resume(s);

      struct timespec ts = {0, 100000000}; /* .10s */
      nanosleep(&ts, NULL);
    }
  } while (wroteSoFar < nbytes);

  s->amountWritten += nbytes;

  (*jenv)->ReleaseByteArrayElements(jenv, bytearray, byte, 0);
  (*jenv)->PopLocalFrame(jenv, NULL);

  return retval < 0 ? SA_ERROR_INVALID : SA_SUCCESS;
}
void
MediaCodecDataDecoder::DecoderLoop()
{
  bool outputDone = false;
  bool draining = false;
  bool waitingEOF = false;

  AutoLocalJNIFrame frame(GetJNIForThread(), 1);
  nsRefPtr<MediaRawData> sample;
  MediaFormat::LocalRef outputFormat(frame.GetEnv());
  nsresult res;

  for (;;) {
    {
      MonitorAutoLock lock(mMonitor);

      while (!mStopping && !mDraining && !mFlushing && mQueue.empty()) {
        if (mQueue.empty()) {
          // We could be waiting here forever if we don't signal that we
          // need more input.
          ENVOKE_CALLBACK(InputExhausted);
        }
        lock.Wait();
      }

      if (mStopping) {
        // Get out of the loop. This is the only exit point.
        break;
      }

      if (mFlushing) {
        mDecoder->Flush();
        ClearQueue();
        mFlushing = false;
        lock.Notify();
        continue;
      }

      if (mDraining && !sample && !waitingEOF) {
        draining = true;
      }

      // We're not stopping or draining, so try to get a sample.
      if (!mQueue.empty()) {
        sample = mQueue.front();
      }
    }

    if (draining && !waitingEOF) {
      MOZ_ASSERT(!sample, "Shouldn't have a sample when pushing EOF frame");

      int32_t inputIndex;
      res = mDecoder->DequeueInputBuffer(DECODER_TIMEOUT, &inputIndex);
      HANDLE_DECODER_ERROR();

      if (inputIndex >= 0) {
        res = mDecoder->QueueInputBuffer(inputIndex, 0, 0, 0,
                                         MediaCodec::BUFFER_FLAG_END_OF_STREAM);
        HANDLE_DECODER_ERROR();
        waitingEOF = true;
      }
    }

    if (sample) {
      // We have a sample, try to feed it to the decoder.
      int32_t inputIndex;
      res = mDecoder->DequeueInputBuffer(DECODER_TIMEOUT, &inputIndex);
      HANDLE_DECODER_ERROR();

      if (inputIndex >= 0) {
        jni::Object::LocalRef buffer(frame.GetEnv());
        res = GetInputBuffer(frame.GetEnv(), inputIndex, &buffer);
        HANDLE_DECODER_ERROR();

        void* directBuffer = frame.GetEnv()->GetDirectBufferAddress(buffer.Get());

        MOZ_ASSERT(frame.GetEnv()->GetDirectBufferCapacity(buffer.Get()) >=
                   sample->Size(),
                   "Decoder buffer is not large enough for sample");

        {
          // We're feeding this to the decoder, so remove it from the queue.
          MonitorAutoLock lock(mMonitor);
          mQueue.pop();
        }

        PodCopy((uint8_t*)directBuffer, sample->Data(), sample->Size());

        res = mDecoder->QueueInputBuffer(inputIndex, 0, sample->Size(),
                                         sample->mTime, 0);
        HANDLE_DECODER_ERROR();

        mDurations.push(media::TimeUnit::FromMicroseconds(sample->mDuration));
        sample = nullptr;
        outputDone = false;
      }
    }

    if (!outputDone) {
      BufferInfo::LocalRef bufferInfo;
      res = BufferInfo::New(&bufferInfo);
      HANDLE_DECODER_ERROR();

      int32_t outputStatus;
      res = mDecoder->DequeueOutputBuffer(bufferInfo, DECODER_TIMEOUT,
                                          &outputStatus);
      HANDLE_DECODER_ERROR();

      if (outputStatus == MediaCodec::INFO_TRY_AGAIN_LATER) {
        // We might want to call mCallback->InputExhausted() here, but there
        // seem to be some possible bad interactions with threading.
      } else if (outputStatus == MediaCodec::INFO_OUTPUT_BUFFERS_CHANGED) {
        res = ResetOutputBuffers();
        HANDLE_DECODER_ERROR();
      } else if (outputStatus == MediaCodec::INFO_OUTPUT_FORMAT_CHANGED) {
        res = mDecoder->GetOutputFormat(ReturnTo(&outputFormat));
        HANDLE_DECODER_ERROR();
      } else if (outputStatus < 0) {
        NS_WARNING("Unknown error from decoder!");
        ENVOKE_CALLBACK(Error);
        // Don't break here just in case it's recoverable. If it's not,
        // other stuff will fail later and we'll bail out.
      } else {
        // We have a valid buffer index >= 0 here.
        int32_t flags;
        res = bufferInfo->Flags(&flags);
        HANDLE_DECODER_ERROR();

        if (flags & MediaCodec::BUFFER_FLAG_END_OF_STREAM) {
          if (draining) {
            draining = false;
            waitingEOF = false;

            mMonitor.Lock();
            mDraining = false;
            mMonitor.Notify();
            mMonitor.Unlock();

            ENVOKE_CALLBACK(DrainComplete);
          }

          mDecoder->ReleaseOutputBuffer(outputStatus, false);
          outputDone = true;

          // We only queue empty EOF frames, so we're done for now.
          continue;
        }

        MOZ_ASSERT(!mDurations.empty(), "Should have had a duration queued");

        media::TimeUnit duration;
        if (!mDurations.empty()) {
          duration = mDurations.front();
          mDurations.pop();
        }

        auto buffer = jni::Object::LocalRef::Adopt(
            frame.GetEnv()->GetObjectArrayElement(mOutputBuffers.Get(),
                                                  outputStatus));
        if (buffer) {
          // The buffer will be null on Android L if we are decoding to a Surface.
          void* directBuffer = frame.GetEnv()->GetDirectBufferAddress(buffer.Get());
          Output(bufferInfo, directBuffer, outputFormat, duration);
        }

        // The Surface will be updated at this point (for video).
        mDecoder->ReleaseOutputBuffer(outputStatus, true);
        PostOutput(bufferInfo, outputFormat, duration);
      }
    }
  }

  Cleanup();

  // We're done.
  MonitorAutoLock lock(mMonitor);
  mStopping = false;
  mMonitor.Notify();
}
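The loop above is Gecko's version of the canonical MediaCodec pump: feed an input buffer whenever the codec offers one, then drain output, treating the negative INFO_* statuses as state changes rather than errors, and signaling end of stream with an empty EOS buffer. A minimal sketch of the same pump against the Android NDK AMediaCodec API, assuming a `codec` that is already configured and started; the ReadSampleFn callback and PumpDecoder name are hypothetical stand-ins:

// Minimal sketch: the canonical MediaCodec feed/drain pump using the
// Android NDK API. `codec` must already be configured and started;
// ReadSampleFn is a hypothetical caller-supplied source of input data.
#include <media/NdkMediaCodec.h>
#include <cstdint>
#include <cstring>

struct Sample { const uint8_t* data; size_t size; int64_t timeUs; };
typedef bool (*ReadSampleFn)(Sample* out, void* ctx);

void PumpDecoder(AMediaCodec* codec, ReadSampleFn readSample, void* ctx)
{
  const int64_t kTimeoutUs = 10000;
  bool inputDone = false, outputDone = false;

  while (!outputDone) {
    if (!inputDone) {
      ssize_t inIdx = AMediaCodec_dequeueInputBuffer(codec, kTimeoutUs);
      if (inIdx >= 0) {
        size_t capacity;
        uint8_t* buf = AMediaCodec_getInputBuffer(codec, inIdx, &capacity);
        Sample s;
        if (readSample(&s, ctx) && s.size <= capacity) {
          memcpy(buf, s.data, s.size);
          AMediaCodec_queueInputBuffer(codec, inIdx, 0, s.size, s.timeUs, 0);
        } else {
          // No more input: queue an empty EOS buffer, as the Gecko loop does.
          AMediaCodec_queueInputBuffer(codec, inIdx, 0, 0, 0,
                                       AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM);
          inputDone = true;
        }
      }
    }

    AMediaCodecBufferInfo info;
    ssize_t outIdx = AMediaCodec_dequeueOutputBuffer(codec, &info, kTimeoutUs);
    if (outIdx >= 0) {
      if (info.flags & AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM) {
        outputDone = true;
      }
      // ... a real decoder would consume the decoded bytes here ...
      AMediaCodec_releaseOutputBuffer(codec, outIdx, false);
    } else if (outIdx == AMEDIACODEC_INFO_OUTPUT_FORMAT_CHANGED) {
      // New format: a real decoder would re-read the output format here.
    }
    // AMEDIACODEC_INFO_TRY_AGAIN_LATER and _OUTPUT_BUFFERS_CHANGED are benign.
  }
}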
void
MediaEngineWebRTC::EnumerateAudioDevices(nsTArray<nsRefPtr<MediaEngineAudioSource> >* aASources)
{
  webrtc::VoEBase* ptrVoEBase = nullptr;
  webrtc::VoEHardware* ptrVoEHw = nullptr;

  // We spawn threads to handle gUM runnables, so we must protect the member vars.
  MutexAutoLock lock(mMutex);

#ifdef MOZ_WIDGET_ANDROID
  jobject context = mozilla::AndroidBridge::Bridge()->GetGlobalContextRef();

  // get the JVM
  JavaVM* jvm = mozilla::AndroidBridge::Bridge()->GetVM();
  JNIEnv* env = GetJNIForThread();

  if (webrtc::VoiceEngine::SetAndroidObjects(jvm, env, (void*)context) != 0) {
    LOG(("VoiceEngine:SetAndroidObjects Failed"));
    return;
  }
#endif

  if (!mVoiceEngine) {
    mVoiceEngine = webrtc::VoiceEngine::Create();
    if (!mVoiceEngine) {
      return;
    }
  }

  PRLogModuleInfo* logs = GetWebRTCLogInfo();
  if (!gWebrtcTraceLoggingOn && logs && logs->level > 0) {
    // no need for a critical section or lock here
    gWebrtcTraceLoggingOn = 1;

    const char* file = PR_GetEnv("WEBRTC_TRACE_FILE");
    if (!file) {
      file = "WebRTC.log";
    }
    LOG(("Logging webrtc to %s level %d", file, logs->level));
    mVoiceEngine->SetTraceFilter(logs->level);
    mVoiceEngine->SetTraceFile(file);
  }

  ptrVoEBase = webrtc::VoEBase::GetInterface(mVoiceEngine);
  if (!ptrVoEBase) {
    return;
  }

  if (!mAudioEngineInit) {
    if (ptrVoEBase->Init() < 0) {
      return;
    }
    mAudioEngineInit = true;
  }

  ptrVoEHw = webrtc::VoEHardware::GetInterface(mVoiceEngine);
  if (!ptrVoEHw) {
    return;
  }

  int nDevices = 0;
  ptrVoEHw->GetNumOfRecordingDevices(nDevices);
  for (int i = 0; i < nDevices; i++) {
    // We use constants here because GetRecordingDeviceName takes char[128].
    char deviceName[128];
    char uniqueId[128];
    // paranoia; jingle doesn't bother with this
    deviceName[0] = '\0';
    uniqueId[0] = '\0';

    int error = ptrVoEHw->GetRecordingDeviceName(i, deviceName, uniqueId);
    if (error) {
      LOG((" VoEHardware:GetRecordingDeviceName: Failed %d",
           ptrVoEBase->LastError()));
      continue;
    }

    if (uniqueId[0] == '\0') {
      // Mac and Linux don't set uniqueId!
      MOZ_ASSERT(sizeof(deviceName) == sizeof(uniqueId)); // total paranoia
      strcpy(uniqueId, deviceName); // safe given the assert and initialization/error-check
    }

    nsRefPtr<MediaEngineWebRTCAudioSource> aSource;
    NS_ConvertUTF8toUTF16 uuid(uniqueId);
    if (mAudioSources.Get(uuid, getter_AddRefs(aSource))) {
      // We've already seen this device; just append.
      aASources->AppendElement(aSource.get());
    } else {
      aSource = new MediaEngineWebRTCAudioSource(mVoiceEngine, i, deviceName,
                                                 uniqueId);
      mAudioSources.Put(uuid, aSource); // Hashtable takes ownership.
      aASources->AppendElement(aSource);
    }
  }

  ptrVoEHw->Release();
  ptrVoEBase->Release();
}
void
MediaEngineWebRTC::EnumerateAudioDevices(MediaSourceType aMediaSource,
                                         nsTArray<nsRefPtr<MediaEngineAudioSource> >* aASources)
{
  ScopedCustomReleasePtr<webrtc::VoEBase> ptrVoEBase;
  ScopedCustomReleasePtr<webrtc::VoEHardware> ptrVoEHw;
  // We spawn threads to handle gUM runnables, so we must protect the member vars.
  MutexAutoLock lock(mMutex);

#ifdef MOZ_WIDGET_ANDROID
  jobject context = mozilla::AndroidBridge::Bridge()->GetGlobalContextRef();

  // get the JVM
  JavaVM* jvm = mozilla::AndroidBridge::Bridge()->GetVM();
  JNIEnv* env = GetJNIForThread();

  if (webrtc::VoiceEngine::SetAndroidObjects(jvm, env, (void*)context) != 0) {
    LOG(("VoiceEngine:SetAndroidObjects Failed"));
    return;
  }
#endif

  if (!mVoiceEngine) {
    mVoiceEngine = webrtc::VoiceEngine::Create();
    if (!mVoiceEngine) {
      return;
    }
  }

  ptrVoEBase = webrtc::VoEBase::GetInterface(mVoiceEngine);
  if (!ptrVoEBase) {
    return;
  }

  if (!mAudioEngineInit) {
    if (ptrVoEBase->Init() < 0) {
      return;
    }
    mAudioEngineInit = true;
  }

  ptrVoEHw = webrtc::VoEHardware::GetInterface(mVoiceEngine);
  if (!ptrVoEHw) {
    return;
  }

  int nDevices = 0;
  ptrVoEHw->GetNumOfRecordingDevices(nDevices);
  int i;
#if defined(MOZ_WIDGET_ANDROID) || defined(MOZ_WIDGET_GONK)
  i = 0; // Bug 1037025 - let the OS handle defaulting for now on android/b2g
#else
  // -1 is "default communications device" depending on OS in webrtc.org code
  i = -1;
#endif
  for (; i < nDevices; i++) {
    // We use constants here because GetRecordingDeviceName takes char[128].
    char deviceName[128];
    char uniqueId[128];
    // paranoia; jingle doesn't bother with this
    deviceName[0] = '\0';
    uniqueId[0] = '\0';

    int error = ptrVoEHw->GetRecordingDeviceName(i, deviceName, uniqueId);
    if (error) {
      LOG((" VoEHardware:GetRecordingDeviceName: Failed %d",
           ptrVoEBase->LastError()));
      continue;
    }

    if (uniqueId[0] == '\0') {
      // Mac and Linux don't set uniqueId!
      MOZ_ASSERT(sizeof(deviceName) == sizeof(uniqueId)); // total paranoia
      strcpy(uniqueId, deviceName); // safe given the assert and initialization/error-check
    }

    nsRefPtr<MediaEngineWebRTCAudioSource> aSource;
    NS_ConvertUTF8toUTF16 uuid(uniqueId);
    if (mAudioSources.Get(uuid, getter_AddRefs(aSource))) {
      // We've already seen this device; just append.
      aASources->AppendElement(aSource.get());
    } else {
      aSource = new MediaEngineWebRTCAudioSource(mThread, mVoiceEngine, i,
                                                 deviceName, uniqueId);
      mAudioSources.Put(uuid, aSource); // Hashtable takes ownership.
      aASources->AppendElement(aSource);
    }
  }
}
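Both enumeration variants reuse a previously created source when the device's unique ID is already in the hashtable, so repeated enumerations hand out stable objects. A minimal sketch of that cache-or-create pattern, with std::map and the Source/GetOrCreateSource names standing in for Gecko's nsRefPtrHashtable and MediaEngineWebRTCAudioSource:

// Minimal sketch: cache-or-create keyed on a device's unique ID.
// std::map and the Source/GetOrCreateSource names are stand-ins for
// Gecko's nsRefPtrHashtable and MediaEngineWebRTCAudioSource.
#include <map>
#include <memory>
#include <string>

struct Source { int index; std::string name; };

std::shared_ptr<Source>
GetOrCreateSource(std::map<std::string, std::shared_ptr<Source>>& cache,
                  const std::string& uniqueId, int index,
                  const std::string& name)
{
  auto it = cache.find(uniqueId);
  if (it != cache.end()) {
    return it->second;  // already seen this device: reuse the same object
  }
  auto source = std::make_shared<Source>(Source{index, name});
  cache.emplace(uniqueId, source);
  return source;
}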