PathCompare::PathCompare(ROSManager* ros_mngr, QWidget* tab_widget)
    : ComperatorPlugin(),
      form(new Ui::Form),
      ros_mngr(ros_mngr),
      topic_type_str("nav_msgs/Path"),
      tpm_list(),
      table_model(new GraphTableModel(tpm_list))
{
  form->setupUi(tab_widget);

  connect(form->ReferencePathSelection, SIGNAL(currentIndexChanged(QString)),
          this, SLOT(topicSelected(QString)));
  updateTopics();

  // Connect to the table model's update function.
  connect(this, SIGNAL(tpmListChanged(QList<TopicPathManagerPtr>)),
          table_model.get(), SLOT(updataTPMList(QList<TopicPathManagerPtr>)));

  // Connect to the ROSManager's topic update tick.
  connect(ros_mngr, SIGNAL(updateModel()), this, SLOT(updateTopics()));

  form->PathInformationTable->setModel(table_model.get());
  // form->PathInformationTable->setModel(new GraphTableModel(tpm_list));

  // Connect the export button to the CSV writer slot.
  connect(form->exportButton, SIGNAL(clicked()), this, SLOT(writeCurrentData()));
}
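// A minimal sketch, not part of the original class: the same wiring written
// with Qt5's pointer-to-member connect overload (Qt 5.7+), which the compiler
// type-checks instead of matching SIGNAL()/SLOT() strings at runtime. It
// assumes ReferencePathSelection is a QComboBox and exportButton a
// QPushButton; kept as a comment because the slots above may be private.
// Inside the constructor the connections could read:
//
//   connect(form->ReferencePathSelection,
//           QOverload<const QString&>::of(&QComboBox::currentIndexChanged),
//           this, &PathCompare::topicSelected);
//   connect(ros_mngr, &ROSManager::updateModel,
//           this, &PathCompare::updateTopics);
//   connect(form->exportButton, &QPushButton::clicked,
//           this, &PathCompare::writeCurrentData);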
StereoImageDisplayBase::StereoImageDisplayBase()
    : Display()
    , left_sub_()
    , right_sub_()
    , left_tf_filter_()
    , right_tf_filter_()
    , messages_received_(0)
    , use_approx_sync_(false)
{
  left_topic_property_ = new RosTopicProperty(
      "Left Image Topic", "",
      QString::fromStdString(ros::message_traits::datatype<sensor_msgs::Image>()),
      "sensor_msgs::Image topic to subscribe to.", this, SLOT(updateTopics()));

  right_topic_property_ = new RosTopicProperty(
      "Right Image Topic", "",
      QString::fromStdString(ros::message_traits::datatype<sensor_msgs::Image>()),
      "sensor_msgs::Image topic to subscribe to.", this, SLOT(updateTopics()));

  transport_property_ = new EnumProperty(
      "Transport Hint", "raw", "Preferred method of sending images.",
      this, SLOT(updateTopics()));

  bool ok = connect(transport_property_, SIGNAL(requestOptions(EnumProperty*)),
                    this, SLOT(fillTransportOptionList(EnumProperty*)));
  Q_ASSERT(ok);

  queue_size_property_ = new IntProperty(
      "Queue Size", 2,
      "Advanced: set the size of the incoming message queue. Increasing this "
      "is useful if your incoming TF data is delayed significantly from your "
      "image data, but it can greatly increase memory usage if the messages are big.",
      this, SLOT(updateQueueSize()));
  queue_size_property_->setMin(1);

  approx_sync_property_ = new BoolProperty(
      "Approximate Sync", false,
      "Advanced: set this to true if your timestamps aren't synchronized.",
      this, SLOT(updateApproxSync()));

  transport_property_->setStdString("raw");
}
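// A standalone sketch of what the "Approximate Sync" option typically selects
// in ROS: pairing the left/right images with message_filters' ApproximateTime
// policy, which matches messages by nearest timestamp, instead of ExactTime,
// which requires identical stamps. Topic names, the callback, and the helper
// are illustrative; this is not the class's actual subscription code.
#include <ros/ros.h>
#include <boost/bind.hpp>
#include <sensor_msgs/Image.h>
#include <message_filters/subscriber.h>
#include <message_filters/synchronizer.h>
#include <message_filters/sync_policies/approximate_time.h>

typedef message_filters::sync_policies::ApproximateTime<
    sensor_msgs::Image, sensor_msgs::Image> ApproxStereoPolicy;

static void onStereoPair(const sensor_msgs::ImageConstPtr& left,
                         const sensor_msgs::ImageConstPtr& right)
{
  // Both images arrive here already paired by (approximately) equal stamps.
}

static void subscribeStereoApprox(ros::NodeHandle& nh, int queue_size)
{
  // Function-local statics keep the filters alive after this call returns.
  static message_filters::Subscriber<sensor_msgs::Image>
      left_sub(nh, "left/image_raw", 1);
  static message_filters::Subscriber<sensor_msgs::Image>
      right_sub(nh, "right/image_raw", 1);
  // The policy's queue size mirrors the "Queue Size" property above.
  static message_filters::Synchronizer<ApproxStereoPolicy>
      sync(ApproxStereoPolicy(queue_size), left_sub, right_sub);
  sync.registerCallback(boost::bind(&onStereoPair, _1, _2));
}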
topicCorpus(corpus* corp,     // The corpus
            int K,            // The number of latent factors
            double latentReg, // Parameter regularizer used by the "standard" recommender system
            double lambda)    // Word regularizer used by HFT
    : corp(corp), K(K), latentReg(latentReg), lambda(lambda)
{
  srand(0); // Deterministic seed for reproducibility

  nUsers = corp->nUsers;
  nBeers = corp->nBeers;
  nWords = corp->nWords;

  votesPerUser = new std::vector<vote*>[nUsers];
  votesPerBeer = new std::vector<vote*>[nBeers];
  trainVotesPerUser = new std::vector<vote*>[nUsers];
  trainVotesPerBeer = new std::vector<vote*>[nBeers];

  // Bucket the votes by user, then by item.
  for (std::vector<vote*>::iterator it = corp->TR_V->begin(); it != corp->TR_V->end(); it++)
  {
    vote* vi = *it;
    votesPerUser[vi->user].push_back(vi);
  }
  for (int user = 0; user < nUsers; user++)
    for (std::vector<vote*>::iterator it = votesPerUser[user].begin(); it != votesPerUser[user].end(); it++)
    {
      vote* vi = *it;
      votesPerBeer[vi->item].push_back(vi);
    }

  // Record the training set and count training votes per user/item.
  for (std::vector<vote*>::iterator it = corp->TR_V->begin(); it != corp->TR_V->end(); it++)
  {
    trainVotes.push_back(*it);
    trainVotesPerUser[(*it)->user].push_back(*it);
    trainVotesPerBeer[(*it)->item].push_back(*it);
    if (nTrainingPerUser.find((*it)->user) == nTrainingPerUser.end())
      nTrainingPerUser[(*it)->user] = 0;
    if (nTrainingPerBeer.find((*it)->item) == nTrainingPerBeer.end())
      nTrainingPerBeer[(*it)->item] = 0;
    nTrainingPerUser[(*it)->user]++;
    nTrainingPerBeer[(*it)->item]++;
  }

  for (std::vector<vote*>::iterator it = corp->TE_V->begin(); it != corp->TE_V->end(); it++)
    testVotes.insert(*it);
  for (std::vector<vote*>::iterator it = corp->VA_V->begin(); it != corp->VA_V->end(); it++)
    validVotes.push_back(*it);

  // Collect test votes whose user or item never appears in the training set.
  std::vector<vote*> remove;
  for (std::set<vote*>::iterator it = testVotes.begin(); it != testVotes.end(); it++)
  {
    if (nTrainingPerUser.find((*it)->user) == nTrainingPerUser.end())
      remove.push_back(*it);
    else if (nTrainingPerBeer.find((*it)->item) == nTrainingPerBeer.end())
      remove.push_back(*it);
  }
  for (std::vector<vote*>::iterator it = remove.begin(); it != remove.end(); it++)
  {
    // Uncomment the line below to ignore (at testing time) users/items that
    // don't appear in the training set.
    // testVotes.erase(*it);
  }

  // Total number of parameters: alpha and kappa, a bias plus a K-dimensional
  // factor per user and per item, and K weights per word.
  NW = 1 + 1 + (K + 1) * (nUsers + nBeers) + K * nWords;

  // Initialize parameters and latent variables: zero all weights.
  W = new double[NW];
  for (int i = 0; i < NW; i++)
    W[i] = 0;
  getG(W, &alpha, &kappa, &beta_user, &beta_beer, &gamma_user, &gamma_beer, &topicWords, true);

  // Set alpha to the average rating.
  for (std::vector<vote*>::iterator vi = trainVotes.begin(); vi != trainVotes.end(); vi++)
    *alpha += (*vi)->value;
  *alpha /= trainVotes.size();

  double train, valid, test, testSte;
  validTestError(train, valid, test, testSte);
  printf("Error w/ offset term only (train/valid/test) = %f/%f/%f (%f)\n", train, valid, test, testSte);

  // Set beta to the user and product offsets from the global average.
  for (std::vector<vote*>::iterator vi = trainVotes.begin(); vi != trainVotes.end(); vi++)
  {
    vote* v = *vi;
    beta_user[v->user] += v->value - *alpha;
    beta_beer[v->item] += v->value - *alpha;
  }
  for (int u = 0; u < nUsers; u++)
    beta_user[u] /= trainVotesPerUser[u].size();
    // beta_user[u] /= votesPerUser[u].size();
  for (int b = 0; b < nBeers; b++)
    beta_beer[b] /= trainVotesPerBeer[b].size();
    // beta_beer[b] /= votesPerBeer[b].size();

  validTestError(train, valid, test, testSte);
  printf("Error w/ offset and bias (train/valid/test) = %f/%f/%f (%f)\n", train, valid, test, testSte);

  // Actually the model works better if we initialize none of these terms.
  if (lambda > 0)
  {
    *alpha = 0;
    for (int u = 0; u < nUsers; u++)
      beta_user[u] = 0;
    for (int b = 0; b < nBeers; b++)
      beta_beer[b] = 0;
  }
  wordTopicCounts = new int*[nWords];
  for (int w = 0; w < nWords; w++)
  {
    wordTopicCounts[w] = new int[K];
    for (int k = 0; k < K; k++)
      wordTopicCounts[w][k] = 0;
  }

  // Generate random topic assignments.
  topicCounts = new long long[K];
  for (int k = 0; k < K; k++)
    topicCounts[k] = 0;
  beerTopicCounts = new int*[nBeers];
  beerWords = new int[nBeers];
  for (int b = 0; b < nBeers; b++)
  {
    beerTopicCounts[b] = new int[K];
    for (int k = 0; k < K; k++)
      beerTopicCounts[b][k] = 0;
    beerWords[b] = 0;
  }
  for (std::vector<vote*>::iterator vi = trainVotes.begin(); vi != trainVotes.end(); vi++)
  {
    vote* v = *vi;
    wordTopics[v] = new int[v->words.size()];
    beerWords[v->item] += v->words.size();
    for (int wp = 0; wp < (int) v->words.size(); wp++)
    {
      int wi = v->words[wp]; // The word
      int t = rand() % K;    // Its random initial topic
      wordTopics[v][wp] = t;
      beerTopicCounts[v->item][t]++;
      wordTopicCounts[wi][t]++;
      topicCounts[t]++;
    }
  }

  // Initialize the background word frequency.
  totalWords = 0;
  backgroundWords = new double[nWords];
  for (int w = 0; w < nWords; w++)
    backgroundWords[w] = 0;
  for (std::vector<vote*>::iterator vi = trainVotes.begin(); vi != trainVotes.end(); vi++)
    for (std::vector<int>::iterator it = (*vi)->words.begin(); it != (*vi)->words.end(); it++)
    {
      totalWords++;
      backgroundWords[*it]++;
    }
  for (int w = 0; w < nWords; w++)
    backgroundWords[w] /= totalWords;

  if (lambda == 0)
  {
    // Pure latent-factor mode (no text): random-initialize the factors of
    // users/items that appear in the training set.
    for (int u = 0; u < nUsers; u++)
    {
      if (nTrainingPerUser.find(u) == nTrainingPerUser.end()) continue;
      for (int k = 0; k < K; k++)
        gamma_user[u][k] = rand() * 1.0 / RAND_MAX;
    }
    for (int b = 0; b < nBeers; b++)
    {
      if (nTrainingPerBeer.find(b) == nTrainingPerBeer.end()) continue;
      for (int k = 0; k < K; k++)
        gamma_beer[b][k] = rand() * 1.0 / RAND_MAX;
    }
  }
  else
  {
    for (int w = 0; w < nWords; w++)
      for (int k = 0; k < K; k++)
        topicWords[w][k] = 0;
  }

  normalizeWordWeights();
  if (lambda > 0)
    updateTopics(true);

  *kappa = 1.0;
}
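// A standalone sketch (not in the original source) making the layout of the
// flat parameter vector W explicit; the terms mirror the NW formula in the
// constructor above, and the sizes below are illustrative only. With K = 5,
// nUsers = 1000, nBeers = 500, nWords = 2000 this gives
// 1 + 1 + 6 * 1500 + 5 * 2000 = 19002 doubles.
static long long parameterCount(int K, int nUsers, int nBeers, int nWords)
{
  long long nw = 1 + 1;                         // alpha, kappa
  nw += (long long)(K + 1) * (nUsers + nBeers); // beta_* biases and gamma_* factors
  nw += (long long)K * nWords;                  // topicWords weights
  return nw;
}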