bool dbEval(const char *ns, BSONObj& cmd, BSONObjBuilder& result, string& errmsg) {
    BSONElement e = cmd.firstElement();
    uassert( "eval needs Code" , e.type() == Code || e.type() == CodeWScope || e.type() == String );

    const char *code = 0;
    switch ( e.type() ) {
    case String:
    case Code:
        code = e.valuestr();
        break;
    case CodeWScope:
        code = e.codeWScopeCode();
        break;
    default:
        assert(0);
    }
    assert( code );

    if ( ! globalScriptEngine ) {
        errmsg = "db side execution is disabled";
        return false;
    }

    auto_ptr<Scope> s = globalScriptEngine->getPooledScope( ns );

    ScriptingFunction f = s->createFunction(code);
    if ( f == 0 ) {
        errmsg = (string)"compile failed: " + s->getError();
        return false;
    }

    if ( e.type() == CodeWScope )
        s->init( e.codeWScopeScopeData() );
    s->localConnect( database->name.c_str() );

    BSONObj args;
    {
        BSONElement argsElement = cmd.findElement("args");
        if ( argsElement.type() == Array ) {
            args = argsElement.embeddedObject();
            if ( edebug ) {
                out() << "args:" << args.toString() << endl;
                out() << "code:\n" << code << endl;
            }
        }
    }

    int res;
    {
        Timer t;
        res = s->invoke(f, args, 10 * 60 * 1000);
        int m = t.millis();
        if ( m > 100 ) {
            out() << "dbeval slow, time: " << dec << m << "ms " << ns << endl;
            if ( m >= 1000 )
                log() << code << endl;
            else
                OCCASIONALLY log() << code << endl;
        }
    }

    if ( res ) {
        result.append("errno", (double) res);
        errmsg = "invoke failed: ";
        errmsg += s->getError();
        return false;
    }

    s->append( result , "retval" , "return" );

    return true;
}
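/* Illustrative sketch (not part of the original source): the shape of the command
   object dbEval() above expects.  The first element carries the code (String, Code,
   or CodeWScope), an optional "args" array is forwarded to the invoked function,
   and on success the function's return value comes back in the result as "retval"
   (on failure, "errno" plus errmsg).  The field name "$eval", the namespace "test",
   and this helper function are assumptions made for the example; it also presumes
   a current database is set and a script engine is available. */
static bool exampleEvalCall( BSONObjBuilder& result, string& errmsg ) {
    BSONObjBuilder b;
    b.append( "$eval", "function( x, y ) { return x + y; }" ); // String code is accepted
    b.appendArray( "args", BSON( "0" << 1 << "1" << 2 ) );     // args[0] == 1, args[1] == 2
    BSONObj cmd = b.done();
    return dbEval( "test", cmd, result, errmsg );              // on success, retval == 3
}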
bool Cloner::go(const char *masterHost, string& errmsg, const string& fromdb, bool logForRepl, bool slaveOk, bool useReplAuth, bool snapshot) {
    massert( "useReplAuth is not written to replication log", !useReplAuth || !logForRepl );

    string todb = cc().database()->name;
    stringstream a,b;
    a << "localhost:" << cmdLine.port;
    b << "127.0.0.1:" << cmdLine.port;
    bool masterSameProcess = ( a.str() == masterHost || b.str() == masterHost );
    if ( masterSameProcess ) {
        if ( fromdb == todb && cc().database()->path == dbpath ) {
            // guard against an "infinite" loop
            /* if you are replicating, the local.sources config may be wrong if you get this */
            errmsg = "can't clone from self (localhost).";
            return false;
        }
    }
    /* todo: we can put these releases inside dbclient or a dbclient specialization.
       or just wait until we get rid of global lock anyway.
    */
    string ns = fromdb + ".system.namespaces";
    list<BSONObj> toClone;
    {
        dbtemprelease r;

        auto_ptr<DBClientCursor> c;
        {
            if ( !masterSameProcess ) {
                // note: this inner 'c' is the connection being set up, not the outer cursor 'c'
                auto_ptr< DBClientConnection > c( new DBClientConnection() );
                if ( !c->connect( masterHost, errmsg ) )
                    return false;
                if( !replAuthenticate(c.get()) )
                    return false;
                conn = c;
            }
            else {
                conn.reset( new DBDirectClient() );
            }
            c = conn->query( ns.c_str(), BSONObj(), 0, 0, 0, slaveOk ? Option_SlaveOk : 0 );
        }

        if ( c.get() == 0 ) {
            errmsg = "query failed " + ns;
            return false;
        }

        while ( c->more() ){
            BSONObj collection = c->next();

            log(2) << "\t cloner got " << collection << endl;

            BSONElement e = collection.findElement("name");
            if ( e.eoo() ) {
                string s = "bad system.namespaces object " + collection.toString();
                massert(s.c_str(), false);
            }
            assert( !e.eoo() );
            assert( e.type() == String );
            const char *from_name = e.valuestr();

            if( strstr(from_name, ".system.") ) {
                /* system.users is cloned -- but nothing else from system. */
                if( legalClientSystemNS( from_name , true ) == 0 ){
                    log(2) << "\t\t not cloning because system collection" << endl;
                    continue;
                }
            }
            else if( strchr(from_name, '$') ) {
                // don't clone index namespaces -- we take care of those separately below.
                log(2) << "\t\t not cloning because has $ " << endl;
                continue;
            }

            toClone.push_back( collection.getOwned() );
        }
    }

    for ( list<BSONObj>::iterator i=toClone.begin(); i != toClone.end(); i++ ){
        {
            // briefly release the global lock between collections
            dbtemprelease r;
        }
        BSONObj collection = *i;
        log(2) << " really will clone: " << collection << endl;
        const char * from_name = collection["name"].valuestr();
        BSONObj options = collection.getObjectField("options");

        /* change name "<fromdb>.collection" -> <todb>.collection */
        const char *p = strchr(from_name, '.');
        assert(p);
        string to_name = todb + p;

        {
            string err;
            const char *toname = to_name.c_str();
            userCreateNS(toname, options, err, logForRepl);
        }
        log(1) << "\t\t cloning " << from_name << " -> " << to_name << endl;
        Query q;
        if( snapshot )
            q.snapshot();
        copy(from_name, to_name.c_str(), false, logForRepl, masterSameProcess, slaveOk, q);
    }

    // now build the indexes
    string system_indexes_from = fromdb + ".system.indexes";
    string system_indexes_to = todb + ".system.indexes";
    /* [dm]: is the ID index sometimes not called "_id_"?  There is other code in the system that looks for a "_id" prefix
             rather than this exact value.  we should standardize.  OR, remove names - which is in the bugdb.  Anyway, this
             is dubious here at the moment.
    */
    copy(system_indexes_from.c_str(), system_indexes_to.c_str(), true, logForRepl, masterSameProcess, slaveOk, BSON( "name" << NE << "_id_" ) );

    return true;
}
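/* Hypothetical call sketch (assumed, not from the original source): how a caller
   such as a clone/copydb command handler might drive Cloner::go() above.  It
   presumes the caller already holds the write lock and has made the destination
   database the current database; "otherhost:27017", "somedb", and this helper
   function are placeholders for the example. */
static bool exampleClone( string& errmsg ) {
    Cloner c;
    return c.go( "otherhost:27017", errmsg, /*fromdb*/ "somedb",
                 /*logForRepl*/ false, /*slaveOk*/ false,
                 /*useReplAuth*/ false, /*snapshot*/ true );
}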
bool Cloner::go(const char *masterHost, string& errmsg, const string& fromdb, bool logForRepl, bool slaveOk, bool useReplAuth) {
    massert( "useReplAuth is not written to replication log", !useReplAuth || !logForRepl );

    string todb = database->name;
    stringstream a,b;
    a << "localhost:" << port;
    b << "127.0.0.1:" << port;
    bool masterSameProcess = ( a.str() == masterHost || b.str() == masterHost );
    if ( masterSameProcess ) {
        if ( fromdb == todb && database->path == dbpath ) {
            // guard against an "infinite" loop
            /* if you are replicating, the local.sources config may be wrong if you get this */
            errmsg = "can't clone from self (localhost).";
            return false;
        }
    }
    /* todo: we can put these releases inside dbclient or a dbclient specialization.
       or just wait until we get rid of global lock anyway.
    */
    string ns = fromdb + ".system.namespaces";

    auto_ptr<DBClientCursor> c;
    {
        dbtemprelease r;
        if ( !masterSameProcess ) {
            auto_ptr< DBClientConnection > c( new DBClientConnection() );
            if ( !c->connect( masterHost, errmsg ) )
                return false;
            if( !replAuthenticate(c.get()) )
                return false;
            conn = c;
        }
        else {
            conn.reset( new DBDirectClient() );
        }
        c = conn->query( ns.c_str(), BSONObj(), 0, 0, 0, slaveOk ? Option_SlaveOk : 0 );
    }

    if ( c.get() == 0 ) {
        errmsg = "query failed " + ns;
        return false;
    }

    while ( 1 ) {
        {
            dbtemprelease r;
            if ( !c->more() )
                break;
        }
        BSONObj collection = c->next();
        BSONElement e = collection.findElement("name");
        if ( e.eoo() ) {
            string s = "bad system.namespaces object " + collection.toString();
            /* temp
            out() << masterHost << endl;
            out() << ns << endl;
            out() << e.toString() << endl;
            exit(1);*/
            massert(s.c_str(), false);
        }
        assert( !e.eoo() );
        assert( e.type() == String );
        const char *from_name = e.valuestr();

        if( strstr(from_name, ".system.") ) {
            /* system.users is cloned -- but nothing else from system. */
            if( strstr(from_name, ".system.users") == 0 )
                continue;
        }
        else if( strchr(from_name, '$') ) {
            // don't clone index namespaces -- we take care of those separately below.
            continue;
        }

        BSONObj options = collection.getObjectField("options");

        /* change name "<fromdb>.collection" -> <todb>.collection */
        const char *p = strchr(from_name, '.');
        assert(p);
        string to_name = todb + p;

        //if( !options.isEmpty() )
        {
            string err;
            const char *toname = to_name.c_str();
            userCreateNS(toname, options, err, logForRepl);

            /* chunks are big enough that we should create the _id index up front, that should
               be faster. perhaps we should do that for everything?  Not doing that yet -- not sure
               how we want to handle _id-less collections, and we might not want to create the index
               there.
            */
            if ( strstr(toname, "._chunks") )
                ensureHaveIdIndex(toname);
        }
        copy(from_name, to_name.c_str(), false, logForRepl, masterSameProcess, slaveOk);
    }

    // now build the indexes
    string system_indexes_from = fromdb + ".system.indexes";
    string system_indexes_to = todb + ".system.indexes";
    copy(system_indexes_from.c_str(), system_indexes_to.c_str(), true, logForRepl, masterSameProcess, slaveOk, BSON( "name" << NE << "_id_" ) );

    return true;
}