MetadataNode DeltaKernel::dumpDetail(PointViewPtr& srcView,
    PointViewPtr& candView, KD3Index& index, DimIndexMap& dims)
{
    MetadataNode root;

    for (PointId id = 0; id < srcView->size(); ++id)
    {
        double x = srcView->getFieldAs<double>(Dimension::Id::X, id);
        double y = srcView->getFieldAs<double>(Dimension::Id::Y, id);
        double z = srcView->getFieldAs<double>(Dimension::Id::Z, id);

        PointId candId = index.neighbor(x, y, z);

        MetadataNode delta = root.add("delta");
        delta.add("i", id);
        for (auto di = dims.begin(); di != dims.end(); ++di)
        {
            DimIndex& d = di->second;

            double sv = srcView->getFieldAs<double>(d.m_srcId, id);
            double cv = candView->getFieldAs<double>(d.m_candId, candId);
            delta.add(d.m_name, sv - cv);
        }
    }
    return root;
}
MetadataNode InfoKernel::dumpSummary(const QuickInfo& qi)
{
    MetadataNode summary;

    summary.add("num_points", qi.m_pointCount);
    summary.add("spatial_reference", qi.m_srs.getWKT());

    MetadataNode bounds = summary.add("bounds");
    MetadataNode x = bounds.add("X");
    x.add("min", qi.m_bounds.minx);
    x.add("max", qi.m_bounds.maxx);
    MetadataNode y = bounds.add("Y");
    y.add("min", qi.m_bounds.miny);
    y.add("max", qi.m_bounds.maxy);
    MetadataNode z = bounds.add("Z");
    z.add("min", qi.m_bounds.minz);
    z.add("max", qi.m_bounds.maxz);

    std::string dims;
    auto di = qi.m_dimNames.begin();
    while (di != qi.m_dimNames.end())
    {
        dims += *di;
        ++di;
        if (di != qi.m_dimNames.end())
            dims += ", ";
    }
    summary.add("dimensions", dims);
    return summary;
}
PointViewPtr IcpFilter::icp(PointViewPtr fixed, PointViewPtr moving) const
{
    typedef pcl::PointXYZ Point;
    typedef pcl::PointCloud<Point> Cloud;

    Cloud::Ptr fixedCloud(new Cloud());
    pclsupport::PDALtoPCD(fixed, *fixedCloud);
    Cloud::Ptr movingCloud(new Cloud());
    pclsupport::PDALtoPCD(moving, *movingCloud);

    pcl::IterativeClosestPoint<Point, Point> icp;
    icp.setInputSource(movingCloud);
    icp.setInputTarget(fixedCloud);

    Cloud result;
    icp.align(result);

    MetadataNode root = getMetadata();
    // I couldn't figure out the template-fu to get
    // `MetadataNodeImpl::setValue` to work for all Eigen matrices with one
    // function, so I'm just brute-forcing the cast for now.
    root.add("transform",
        Eigen::MatrixXd(icp.getFinalTransformation().cast<double>()));
    root.add("converged", icp.hasConverged());
    root.add("fitness", icp.getFitnessScore());

    assert(moving->size() == result.points.size());
    for (PointId i = 0; i < moving->size(); ++i)
    {
        moving->setField(Dimension::Id::X, i, result.points[i].x);
        moving->setField(Dimension::Id::Y, i, result.points[i].y);
        moving->setField(Dimension::Id::Z, i, result.points[i].z);
    }
    return moving;
}
TEST(XMLSchemaTest, copy)
{
    using namespace pdal;

    std::string xml = ReadXML(TestConfig::dataPath() +
        "../../schemas/16-dim-schema.xml");
    std::string xsd = ReadXML(TestConfig::dataPath() + "../../schemas/LAS.xsd");
    XMLSchema s1(xml, xsd);

    PointTable table;
    XMLDimList dims = s1.xmlDims();
    for (auto di = dims.begin(); di != dims.end(); ++di)
    {
        Dimension::Id id = table.layout()->registerOrAssignDim(
            di->m_name, di->m_dimType.m_type);
        s1.setId(di->m_name, id);
    }

    MetadataNode m;
    MetadataNode m1 = m.add("m1", 1u);
    MetadataNode m2 = m.add("m2", 1);
    MetadataNode m1prime = m.add("m1prime", "Some other metadata");
    m1.add("uuid", Uuid());

    XMLSchema s2(s1.xmlDims(), m);
    std::string xml_output = s2.xml();

    XMLSchema s3(xml_output, xsd);
    XMLDimList dims3 = s3.xmlDims();

    EXPECT_EQ(dims.size(), dims3.size());

    auto di1 = dims.begin();
    auto di3 = dims3.begin();
    while (di1 != dims.end() && di3 != dims3.end())
    {
        XMLDim& dim1 = *di1;
        XMLDim& dim3 = *di3;

        EXPECT_EQ(dim1.m_name, dim3.m_name);
        EXPECT_EQ(dim1.m_dimType.m_type, dim3.m_dimType.m_type);
        di1++;
        di3++;
    }
}
void StatsFilter::extractMetadata(PointTableRef table)
{
    uint32_t position(0);

    for (auto di = m_stats.begin(); di != m_stats.end(); ++di)
    {
        const Summary& s = di->second;

        MetadataNode t = m_metadata.addList("statistic");
        t.add("position", position++);
        s.extractMetadata(t);
    }

    // If we have X, Y, & Z dims, output bboxes
    auto xs = m_stats.find(Dimension::Id::X);
    auto ys = m_stats.find(Dimension::Id::Y);
    auto zs = m_stats.find(Dimension::Id::Z);
    if (xs != m_stats.end() && ys != m_stats.end() && zs != m_stats.end())
    {
        BOX3D box(xs->second.minimum(), ys->second.minimum(),
            zs->second.minimum(), xs->second.maximum(),
            ys->second.maximum(), zs->second.maximum());
        pdal::Polygon p(box);

        MetadataNode mbox = Utils::toMetadata(box);
        MetadataNode box_metadata = m_metadata.add("bbox");
        MetadataNode metadata = box_metadata.add("native");
        MetadataNode boundary = metadata.add("boundary", p.json());
        MetadataNode bbox = metadata.add(mbox);

        SpatialReference ref = table.anySpatialReference();
        // If we don't get an SRS from the PointTableRef,
        // we won't add another metadata node.
        if (!ref.empty())
        {
            p.setSpatialReference(ref);
            SpatialReference epsg4326("EPSG:4326");
            pdal::Polygon pdd = p.transform(epsg4326);
            BOX3D ddbox = pdd.bounds();
            MetadataNode epsg_4326_box = Utils::toMetadata(ddbox);
            MetadataNode dddbox = box_metadata.add("EPSG:4326");
            dddbox.add(epsg_4326_box);
            MetadataNode ddboundary = dddbox.add("boundary", pdd.json());
        }
    }
}
void Stage::setSpatialReference(MetadataNode& m,
    const SpatialReference& spatialRef)
{
    m_spatialReference = spatialRef;

    auto pred = [](MetadataNode m){ return m.name() == "spatialreference"; };

    MetadataNode spatialNode = m.findChild(pred);
    if (spatialNode.empty())
    {
        m.add(spatialRef.toMetadata());
        m.add("spatialreference", spatialRef.getWKT(), "SRS of this stage");
        m.add("comp_spatialreference", spatialRef.getWKT(),
            "SRS of this stage");
    }
}
MetadataNode PointView::toMetadata() const
{
    MetadataNode node;

    const Dimension::IdList& dims = layout()->dims();

    for (PointId idx = 0; idx < size(); idx++)
    {
        MetadataNode pointnode = node.add(std::to_string(idx));
        for (auto di = dims.begin(); di != dims.end(); ++di)
        {
            double v = getFieldAs<double>(*di, idx);
            pointnode.add(layout()->dimName(*di), v);
        }
    }
    return node;
}
inline MetadataNode toMetadata(const PointViewPtr view)
{
    MetadataNode node;

    const Dimension::IdList& dims = view->dims();

    for (PointId idx = 0; idx < view->size(); idx++)
    {
        MetadataNode pointnode = node.add(std::to_string(idx));
        for (auto di = dims.begin(); di != dims.end(); ++di)
        {
            double v = view->getFieldAs<double>(*di, idx);
            pointnode.add(Dimension::name(*di), v);
        }
    }
    return node;
}
void InfoKernel::dump(MetadataNode& root)
{
    if (m_showSchema)
        root.add(m_manager->pointTable().toMetadata().clone("schema"));

    if (m_PointCloudSchemaOutput.size() > 0)
    {
#ifdef PDAL_HAVE_LIBXML2
        XMLSchema schema(m_manager->pointTable().layout());

        std::ostream *out = FileUtils::createFile(m_PointCloudSchemaOutput);
        std::string xml(schema.xml());
        out->write(xml.c_str(), xml.size());
        FileUtils::closeFile(out);
#else
        std::cerr << "libxml2 support not enabled, no schema is produced" <<
            std::endl;
#endif
    }

    if (m_showStats)
        root.add(m_statsStage->getMetadata().clone("stats"));

    if (m_pipelineFile.size() > 0)
        PipelineWriter::writePipeline(m_manager->getStage(), m_pipelineFile);

    if (m_pointIndexes.size())
    {
        PointViewSet viewSet = m_manager->views();
        assert(viewSet.size() == 1);
        root.add(dumpPoints(*viewSet.begin()).clone("points"));
    }

    if (m_queryPoint.size())
    {
        PointViewSet viewSet = m_manager->views();
        assert(viewSet.size() == 1);
        root.add(dumpQuery(*viewSet.begin()));
    }

    if (m_showMetadata)
    {
        // If we have a reader cached, this means we weren't reading a
        // pipeline file directly.  In that case, use the metadata from the
        // reader (old behavior).  Otherwise, return the full metadata of
        // the entire pipeline.
        if (m_reader)
            root.add(m_reader->getMetadata().clone("metadata"));
        else
            root.add(m_manager->getMetadata().clone("metadata"));
    }

    if (m_boundary)
    {
        PointViewSet viewSet = m_manager->views();
        assert(viewSet.size() == 1);
        root.add(m_hexbinStage->getMetadata().clone("boundary"));
    }
}
void Stage::setSpatialReference(MetadataNode& m,
    const SpatialReference& spatialRef)
{
    m_spatialReference = spatialRef;

    auto pred = [](MetadataNode m){ return m.name() == "spatialreference"; };

    MetadataNode spatialNode = m.findChild(pred);
    if (spatialNode.empty())
    {
        m.add(Utils::toMetadata(spatialRef));
        m.add("spatialreference",
            spatialRef.getWKT(SpatialReference::eHorizontalOnly, false),
            "SRS of this stage");
        m.add("comp_spatialreference",
            spatialRef.getWKT(SpatialReference::eCompoundOK, false),
            "SRS of this stage");
    }
}
int HausdorffKernel::execute()
{
    PointTable srcTable;
    PointViewPtr srcView = loadSet(m_sourceFile, srcTable);

    PointTable candTable;
    PointViewPtr candView = loadSet(m_candidateFile, candTable);

    double hausdorff = Utils::computeHausdorff(srcView, candView);

    MetadataNode root;
    root.add("filenames", m_sourceFile);
    root.add("filenames", m_candidateFile);
    root.add("hausdorff", hausdorff);
    root.add("pdal_version", pdal::GetFullVersionString());
    Utils::toJSON(root, std::cout);

    return 0;
}
static void add(MetadataNode& parent, const std::string& name,
    const std::string& value)
{
    Json::Value node;
    Json::Reader reader;
    if (reader.parse(value, node))
        add(parent, name, node);
    else
        parent.add(name, value);
}
static void add(MetadataNode& parent, const std::string& name,
    const Json::Value& node)
{
    if (node.isNull())
    {
        parent.add(name, "");
    }
    else if (node.isBool())
    {
        parent.add(name, node.asBool());
    }
    else if (node.isInt())
    {
        parent.add(name, node.asInt64());
    }
    else if (node.isUInt())
    {
        parent.add(name, node.asUInt64());
    }
    else if (node.isDouble())
    {
        parent.add(name, node.asDouble());
    }
    else if (node.isString())
    {
        parent.add(name, node.asString());
    }
    else if (node.isObject())
    {
        MetadataNode object = parent.add(name);
        for (const std::string& name: node.getMemberNames())
            add(object, name, node[name]);
    }
    else if (node.isArray())
    {
        for (const Json::Value& item: node)
            add(parent, name, item);
    }
}
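The converter above flattens a parsed JSON value into metadata nodes: objects become child nodes, array elements are added repeatedly under the parent's name, and scalars map directly. A language-shifted sketch of that recursion in Python (plain `(name, value)` pair lists stand in for `MetadataNode`; everything here is illustrative, not PDAL API):

```python
import json

def add(parent, name, node):
    """Mirror of the C++ converter: recurse into objects, repeat the
    name for array elements, map scalars directly, null -> ""."""
    if node is None:
        parent.append((name, ""))
    elif isinstance(node, dict):
        child = []                     # stands in for parent.add(name)
        parent.append((name, child))
        for key, value in node.items():
            add(child, key, value)
    elif isinstance(node, list):
        for item in node:              # arrays repeat the parent name
            add(parent, name, item)
    else:                              # bool/int/float/str map directly
        parent.append((name, node))

root = []
add(root, "settings", json.loads('{"a": 1, "b": [true, null]}'))
print(root)   # -> [('settings', [('a', 1), ('b', True), ('b', '')])]
```

Note the array case matches the C++ behavior: `{"b": [true, null]}` yields two sibling entries both named `b`, rather than a nested list node.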
MetadataNode DeltaKernel::dump(PointViewPtr& srcView, PointViewPtr& candView,
    KD3Index& index, DimIndexMap& dims)
{
    MetadataNode root;

    for (PointId id = 0; id < srcView->size(); ++id)
    {
        double x = srcView->getFieldAs<double>(Dimension::Id::X, id);
        double y = srcView->getFieldAs<double>(Dimension::Id::Y, id);
        double z = srcView->getFieldAs<double>(Dimension::Id::Z, id);

        PointId candId = index.neighbor(x, y, z);

        // It may be faster to put in a special case to avoid having to
        // fetch X, Y and Z more than once, but this is simpler and in
        // most cases it will make no practical difference.
        for (auto di = dims.begin(); di != dims.end(); ++di)
        {
            DimIndex& d = di->second;

            double sv = srcView->getFieldAs<double>(d.m_srcId, id);
            double cv = candView->getFieldAs<double>(d.m_candId, candId);
            accumulate(d, sv - cv);
        }
    }

    root.add("source", m_sourceFile);
    root.add("candidate", m_candidateFile);
    for (auto dpair : dims)
    {
        DimIndex& d = dpair.second;

        MetadataNode dimNode = root.add(d.m_name);
        dimNode.add("min", d.m_min);
        dimNode.add("max", d.m_max);
        dimNode.add("mean", d.m_avg);
    }
    return root;
}
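The shape of `dump()` — pair each source point with its nearest candidate point, then accumulate per-dimension min/max/mean of the differences — can be sketched in a few lines of Python. This is a hedged illustration, not PDAL code: the brute-force `neighbor` search stands in for `KD3Index`, and the toy `src`/`cand` clouds are made up:

```python
src  = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]   # illustrative source points
cand = [(0.1, 0.0, 0.0), (1.0, 1.2, 1.0)]   # illustrative candidate points

def neighbor(pt, cloud):
    """Index of the nearest point in cloud (brute force, squared distance)."""
    return min(range(len(cloud)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(pt, cloud[i])))

stats = {d: {"min": float("inf"), "max": float("-inf"), "sum": 0.0}
         for d in ("X", "Y", "Z")}
for s in src:
    c = cand[neighbor(s, cand)]
    for d, sv, cv in zip(("X", "Y", "Z"), s, c):
        delta = sv - cv                      # accumulate(d, sv - cv)
        stats[d]["min"] = min(stats[d]["min"], delta)
        stats[d]["max"] = max(stats[d]["max"], delta)
        stats[d]["sum"] += delta
for d in stats:
    stats[d]["mean"] = stats[d]["sum"] / len(src)

print(stats["X"]["mean"])   # -> -0.05
```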
MetadataNode InfoKernel::run(const std::string& filename)
{
    MetadataNode root;

    root.add("filename", filename);
    if (m_showSummary)
    {
        QuickInfo qi = m_reader->preview();
        MetadataNode summary = dumpSummary(qi).clone("summary");
        root.add(summary);
    }
    else
    {
        applyExtraStageOptionsRecursive(m_manager->getStage());
        if (m_needPoints || m_showMetadata)
            m_manager->execute();
        else
            m_manager->prepare();
        dump(root);
    }
    root.add("pdal_version", pdal::GetFullVersionString());
    return root;
}
MetadataNode InfoKernel::dumpPoints(PointViewPtr inView) const
{
    MetadataNode root;
    PointViewPtr outView = inView->makeNew();

    // Stick the requested points in a separate view.
    std::vector<PointId> points = getListOfPoints(m_pointIndexes);
    for (size_t i = 0; i < points.size(); ++i)
    {
        PointId id = (PointId)points[i];
        if (id < inView->size())
            outView->appendPoint(*inView.get(), id);
    }

    MetadataNode tree = outView->toMetadata();
    std::string prefix("point ");
    for (size_t i = 0; i < outView->size(); ++i)
    {
        MetadataNode n = tree.findChild(std::to_string(i));
        n.add("PointId", points[i]);
        root.add(n.clone("point"));
    }
    return root;
}
int DiffKernel::execute()
{
    PointTable sourceTable;
    Stage& source = makeReader(m_sourceFile, m_driverOverride);
    source.prepare(sourceTable);
    PointViewSet sourceSet = source.execute(sourceTable);

    MetadataNode errors;

    PointTable candidateTable;
    Stage& candidate = makeReader(m_candidateFile, m_driverOverride);
    candidate.prepare(candidateTable);
    PointViewSet candidateSet = candidate.execute(candidateTable);

    assert(sourceSet.size() == 1);
    assert(candidateSet.size() == 1);
    PointViewPtr sourceView = *sourceSet.begin();
    PointViewPtr candidateView = *candidateSet.begin();

    if (candidateView->size() != sourceView->size())
    {
        std::ostringstream oss;
        oss << "Source and candidate files do not have the same point count";
        errors.add("count.error", oss.str());
        errors.add("count.candidate", candidateView->size());
        errors.add("count.source", sourceView->size());
    }

    MetadataNode source_metadata = sourceTable.metadata();
    MetadataNode candidate_metadata = candidateTable.metadata();
    if (source_metadata != candidate_metadata)
    {
        std::ostringstream oss;
        oss << "Source and candidate files do not have the same metadata";
        errors.add("metadata.error", oss.str());
        errors.add(source_metadata);
        errors.add(candidate_metadata);
    }

    if (candidateTable.layout()->dims().size() !=
        sourceTable.layout()->dims().size())
    {
        std::ostringstream oss;
        oss << "Source and candidate files do not have the same "
            "number of dimensions";
        errors.add("dimensions.error", oss.str());
    }

    if (errors.children().size())
    {
        Utils::toJSON(errors, std::cout);
        return 1;
    }
    return 0;
}
bool XMLSchema::loadMetadata(xmlNode *startNode, MetadataNode& input)
{
    // Expect metadata in the following form.  We skip the root element
    // because we expect to be given one with our input:
    //
    // <pc:metadata>
    //     <Metadata name="root" type="">
    //         <Metadata name="compression" type="string">lazperf</Metadata>
    //         <Metadata name="version" type="string">1.0</Metadata>
    //     </Metadata>
    // </pc:metadata>

    xmlNode *node = startNode;
    for (node = startNode; node; node = node->next)
    {
        if (node->type != XML_ELEMENT_NODE)
            continue;
        if (std::string((const char*)node->name) == "Metadata")
        {
            const char *fieldname =
                (const char*)xmlGetProp(node, (const xmlChar*)"name");
            const char *etype =
                (const char*)xmlGetProp(node, (const xmlChar*)"type");
            const char *description =
                (const char*)xmlGetProp(node, (const xmlChar*)"description");
            const char *text = (const char*)xmlNodeGetContent(node);

            // Check for a missing name before comparing against "root" --
            // Utils::iequals can't be handed a null pointer.
            if (!fieldname)
            {
                std::cerr << "Unable to read metadata for node '" <<
                    (const char*)node->name << "': no \"name\" was given.";
                return false;
            }
            if (!Utils::iequals(fieldname, "root"))
                input.add(fieldname, text ? text : "",
                    description ? description : "");
        }
        loadMetadata(node->children, input);
    }
    return true;
}
TEST(LasWriterTest, metadata_options)
{
    Options ops;

    Option metadataOp("metadata", "");

    Options metadataOps;
    metadataOps.add("format", 4);
    metadataOps.add("software_id", "MySoftwareId");
    metadataOps.add("system_id", "FORWARD");
    metadataOps.add("minor_version", "forward");
    metadataOp.setOptions(metadataOps);
    ops.add(metadataOp);
    ops.add("filename", Support::temppath("wontgetwritten"));

    LasWriter writer;
    writer.setOptions(ops);

    PointTable table;
    writer.prepare(table);

    MetadataNode m = writer.getMetadata();
    m.add("minor_version", 56);

    uint8_t format =
        (uint8_t)LasTester::headerVal<unsigned>(writer, "format");
    EXPECT_EQ(format, 4u);
    std::string softwareId =
        LasTester::headerVal<std::string>(writer, "software_id");
    EXPECT_EQ(softwareId, "MySoftwareId");
    std::string systemId =
        LasTester::headerVal<std::string>(writer, "system_id");

    // Since the option specifies forward and there is no associated
    // metadata, the value should be the default.
    LasHeader header;
    EXPECT_EQ(systemId, header.getSystemIdentifier());

    // In this case, we should have metadata to override the default.
    uint8_t minorVersion =
        (uint8_t)LasTester::headerVal<unsigned>(writer, "minor_version");
    EXPECT_EQ(minorVersion, 56u);
}
void InfoKernel::dump(MetadataNode& root)
{
    if (m_showSchema)
        root.add(Utils::toMetadata(m_manager->pointTable()).clone("schema"));

    if (m_PointCloudSchemaOutput.size() > 0)
    {
#ifdef PDAL_HAVE_LIBXML2
        XMLSchema schema(m_manager->pointTable().layout());

        std::ostream *out = FileUtils::createFile(m_PointCloudSchemaOutput);
        std::string xml(schema.xml());
        out->write(xml.c_str(), xml.size());
        FileUtils::closeFile(out);
#else
        std::cerr << "libxml2 support not enabled, no schema is produced" <<
            std::endl;
#endif
    }

    if (m_showStats)
        root.add(m_statsStage->getMetadata().clone("stats"));

    if (m_pipelineFile.size() > 0)
        PipelineWriter::writePipeline(m_manager->getStage(), m_pipelineFile);

    if (m_pointIndexes.size())
    {
        PointViewSet viewSet = m_manager->views();
        assert(viewSet.size() == 1);
        root.add(dumpPoints(*viewSet.begin()).clone("points"));
    }

    if (m_queryPoint.size())
    {
        PointViewSet viewSet = m_manager->views();
        assert(viewSet.size() == 1);
        root.add(dumpQuery(*viewSet.begin()));
    }

    if (m_showMetadata)
        root.add(m_reader->getMetadata().clone("metadata"));

    if (m_boundary)
    {
        PointViewSet viewSet = m_manager->views();
        assert(viewSet.size() == 1);
        root.add(m_hexbinStage->getMetadata().clone("boundary"));
    }
}
void DiffKernel::checkPoints(const PointView& source_data,
    const PointView& candidate_data, MetadataNode errors)
{
    uint32_t MAX_BADBYTES(20);
    uint32_t badbytes(0);

    // Both schemas have already been determined to be equal, so they are
    // the same size and in the same order.
    Dimension::IdList const& sourceDims = source_data.dims();
    Dimension::IdList const& candidateDims = candidate_data.dims();

    char sbuf[8];
    char cbuf[8];
    for (PointId idx = 0; idx < source_data.size(); ++idx)
    {
        for (size_t d = 0; d < sourceDims.size(); ++d)
        {
            Dimension::Id sd = sourceDims[d];
            Dimension::Id cd = candidateDims[d];

            source_data.getRawField(sd, idx, (void *)sbuf);
            candidate_data.getRawField(cd, idx, (void *)cbuf);
            Dimension::Type t = Dimension::defaultType(cd);
            size_t size = Dimension::size(t);
            if (memcmp(sbuf, cbuf, size))
            {
                std::ostringstream oss;
                oss << "Point " << idx << " differs for dimension \"" <<
                    Dimension::name(sd) << "\" for source and candidate";
                errors.add("data.error", oss.str());
                badbytes++;
            }
        }
        if (badbytes > MAX_BADBYTES)
            break;
    }
}
void addMetadata(PyObject *list, MetadataNode m)
{
    if (!PyList_Check(list))
        return;
    for (Py_ssize_t i = 0; i < PyList_Size(list); ++i)
    {
        PyObject *tuple = PyList_GetItem(list, i);
        if (!PyTuple_Check(tuple) || PyTuple_Size(tuple) != 5)
            continue;

        PyObject *o;
        o = PyTuple_GetItem(tuple, 0);
        if (!o || !PyString_Check(o))
            continue;
        std::string name = PyString_AsString(o);

        o = PyTuple_GetItem(tuple, 1);
        if (!o || !PyString_Check(o))
            continue;
        std::string value = PyString_AsString(o);

        o = PyTuple_GetItem(tuple, 2);
        if (!o || !PyString_Check(o))
            continue;
        std::string type = PyString_AsString(o);
        if (type.empty())
            type = Metadata::inferType(value);

        o = PyTuple_GetItem(tuple, 3);
        if (!o || !PyString_Check(o))
            continue;
        std::string description = PyString_AsString(o);

        PyObject *submeta = PyTuple_GetItem(tuple, 4);
        MetadataNode child = m.add(name, value, type, description);
        if (submeta)
            addMetadata(submeta, child);
    }
}
void HexBin::done(PointTableRef table)
{
    m_grid->processSample();
    m_grid->findShapes();
    m_grid->findParentPaths();

    std::ostringstream offsets;
    offsets << "MULTIPOINT (";
    for (int i = 0; i < 6; ++i)
    {
        hexer::Point p = m_grid->offset(i);
        offsets << p.m_x << " " << p.m_y;
        if (i != 5)
            offsets << ", ";
    }
    offsets << ")";

    m_metadata.add("edge_length", m_edgeLength, "The edge length of the "
        "hexagon to use in situations where you do not want to estimate "
        "based on a sample");
    m_metadata.add("threshold", m_density, "Minimum number of points inside "
        "a hexagon to be considered full");
    m_metadata.add("sample_size", m_sampleSize, "Number of samples to use "
        "when estimating hexagon edge size. Specify 0.0 or omit options "
        "for edge_size if you want to compute one.");
    m_metadata.add("hex_offsets", offsets.str(), "Offset of hex corners from "
        "hex centers.");

    if (m_outputTesselation)
    {
        MetadataNode hexes = m_metadata.add("hexagons");
        for (HexIter hi = m_grid->hexBegin(); hi != m_grid->hexEnd(); ++hi)
        {
            using namespace boost;

            HexInfo h = *hi;

            MetadataNode hex = hexes.addList("hexagon");
            hex.add("density", h.density());
            hex.add("gridpos", lexical_cast<std::string>(h.xgrid()) + " " +
                lexical_cast<std::string>(h.ygrid()));
            std::ostringstream oss;
            // Using stream limits precision (default 6)
            oss << "POINT (" << h.x() << " " << h.y() << ")";
            hex.add("center", oss.str());
        }
    }

    std::ostringstream polygon;
    polygon.setf(std::ios_base::fixed, std::ios_base::floatfield);
    polygon.precision(m_options.getValueOrDefault<uint32_t>("precision", 8));
    m_grid->toWKT(polygon);
    m_metadata.add("boundary", polygon.str(),
        "Boundary MULTIPOLYGON of domain");

    /***
      We want to make these bumps on edges go away, which means that we
      want to eliminate both B and C.  If we take a line from A -> C, we
      need the tolerance to eliminate B.  After that we're left with the
      triangle ACD and we want to eliminate C.

      The perpendicular distance from AD to C is the hexagon height / 2,
      so we set the tolerance a little larger than that.  This is larger
      than the perpendicular distance needed to eliminate B in ABC, so it
      should serve for both cases.

           B ______ C
            /      \
         A /        \ D
    ***/
    double tolerance = 1.1 * m_grid->height() / 2;
    m_metadata.add("smoothed_boundary",
        Geometry::smoothPolygon(polygon.str(), tolerance),
        "Smoothed boundary MULTIPOLYGON of domain");

    double area = Geometry::computeArea(polygon.str());
    double density = (double)m_count / area;
    m_metadata.add("density", density, "Number of points per square unit");
    m_metadata.add("area", area,
        "Area in square units of tessellated polygon");
}
void HexBin::done(PointTableRef table)
{
    m_grid->processSample();

    try
    {
        m_grid->findShapes();
        m_grid->findParentPaths();
    }
    catch (hexer::hexer_error& e)
    {
        m_metadata.add("error", e.what(),
            "Hexer threw an error and was unable to compute a boundary");
        m_metadata.add("boundary", "MULTIPOLYGON EMPTY",
            "Empty polygon -- unable to compute boundary");
        return;
    }

    std::ostringstream offsets;
    offsets << "MULTIPOINT (";
    for (int i = 0; i < 6; ++i)
    {
        hexer::Point p = m_grid->offset(i);
        offsets << p.m_x << " " << p.m_y;
        if (i != 5)
            offsets << ", ";
    }
    offsets << ")";

    m_metadata.add("edge_length", m_edgeLength, "The edge length of the "
        "hexagon to use in situations where you do not want to estimate "
        "based on a sample");
    m_metadata.add("estimated_edge", m_grid->height(),
        "Estimated computed edge distance");
    m_metadata.add("threshold", m_grid->denseLimit(),
        "Minimum number of points inside a hexagon to be considered full");
    m_metadata.add("sample_size", m_sampleSize, "Number of samples to use "
        "when estimating hexagon edge size. Specify 0.0 or omit options "
        "for edge_size if you want to compute one.");
    m_metadata.add("hex_offsets", offsets.str(), "Offset of hex corners from "
        "hex centers.");

    std::ostringstream polygon;
    polygon.setf(std::ios_base::fixed, std::ios_base::floatfield);
    polygon.precision(m_precision);
    m_grid->toWKT(polygon);

    if (m_outputTesselation)
    {
        MetadataNode hexes = m_metadata.add("hexagons");
        for (HexIter hi = m_grid->hexBegin(); hi != m_grid->hexEnd(); ++hi)
        {
            HexInfo h = *hi;

            MetadataNode hex = hexes.addList("hexagon");
            hex.add("density", h.density());
            hex.add("gridpos", Utils::toString(h.xgrid()) + " " +
                Utils::toString(h.ygrid()));
            std::ostringstream oss;
            // Using stream limits precision (default 6)
            oss << "POINT (" << h.x() << " " << h.y() << ")";
            hex.add("center", oss.str());
        }
        m_metadata.add("hex_boundary", polygon.str(),
            "Boundary MULTIPOLYGON of domain");
    }

    /***
      We want to make these bumps on edges go away, which means that we
      want to eliminate both B and C.  If we take a line from A -> C, we
      need the tolerance to eliminate B.  After that we're left with the
      triangle ACD and we want to eliminate C.

      The perpendicular distance from AD to C is the hexagon height / 2,
      so we set the tolerance a little larger than that.  This is larger
      than the perpendicular distance needed to eliminate B in ABC, so it
      should serve for both cases.

           B ______ C
            /      \
         A /        \ D
    ***/
    double tolerance = 1.1 * m_grid->height() / 2;
    double cull = m_cullArg->set() ? m_cullArea : (6 * tolerance * tolerance);

    SpatialReference srs(table.anySpatialReference());
    pdal::Polygon p(polygon.str(), srs);
    pdal::Polygon density_p(polygon.str(), srs);

    // If the SRS was geographic, use the relevant UTM zone for the area
    // and density computations.
    if (srs.isGeographic())
    {
        // Compute a UTM polygon
        BOX3D box = p.bounds();
        int zone = SpatialReference::calculateZone(box.minx, box.miny);

        auto makezone = [] (int zone) -> std::string
        {
            std::ostringstream z;
            // WGS84 UTM zones: EPSG:326xx in the north (positive zone
            // number), EPSG:327xx in the south (negative zone number).
            z << (zone > 0 ? "EPSG:326" : "EPSG:327") << std::setw(2) <<
                std::setfill('0') << std::abs(zone);
            return z.str();
        };

        SpatialReference utm(makezone(zone));
        density_p = p.transform(utm);
    }

    if (m_doSmooth)
        p = p.simplify(tolerance, cull);

    std::string boundary_text = p.wkt(m_precision);

    m_metadata.add("boundary", boundary_text,
        "Approximated MULTIPOLYGON of domain");
    m_metadata.addWithType("boundary_json", p.json(), "json",
        "Approximated MULTIPOLYGON of domain");

    double area = density_p.area();
    double density = (double)m_count / area;
    if (std::isinf(density))
    {
        density = -1.0;
        area = -1.0;
    }

    m_metadata.add("density", density,
        "Number of points per square unit (total area)");
    m_metadata.add("area", area,
        "Area in square units of tessellated polygon");

    double hex_area(((3 * SQRT_3) / 2.0) *
        (m_grid->height() * m_grid->height()));
    point_count_t totalCount(0);
    double totalArea(0.0);
    for (HexIter hi = m_grid->hexBegin(); hi != m_grid->hexEnd(); ++hi)
    {
        HexInfo h = *hi;
        totalCount += h.density();
        totalArea += hex_area;
    }

    // Points per square unit, not square units per point.
    double avg_density = (double)totalCount / totalArea;
    m_metadata.add("avg_pt_per_sq_unit", avg_density, "Number of points "
        "per square unit (tessellated area within inclusions)");

    double avg_spacing = std::sqrt(1 / density);
    m_metadata.add("avg_pt_spacing", avg_spacing,
        "Avg point spacing (x/y units)");
}
void RdbReader::readMetadata(RdbPointcloud& reader, MetadataNode root)
{
    struct Converter // simple JSON to MetadataNode converter
    {
        static void add(MetadataNode& parent, const std::string& name,
            const std::string& value)
        {
            Json::Value node;
            Json::Reader reader;
            if (reader.parse(value, node))
                add(parent, name, node);
            else
                parent.add(name, value);
        }

        static void add(MetadataNode& parent, const std::string& name,
            const Json::Value& node)
        {
            if (node.isNull())
            {
                parent.add(name, "");
            }
            else if (node.isBool())
            {
                parent.add(name, node.asBool());
            }
            else if (node.isInt())
            {
                parent.add(name, node.asInt64());
            }
            else if (node.isUInt())
            {
                parent.add(name, node.asUInt64());
            }
            else if (node.isDouble())
            {
                parent.add(name, node.asDouble());
            }
            else if (node.isString())
            {
                parent.add(name, node.asString());
            }
            else if (node.isObject())
            {
                MetadataNode object = parent.add(name);
                for (const std::string& name: node.getMemberNames())
                    add(object, name, node[name]);
            }
            else if (node.isArray())
            {
                for (const Json::Value& item: node)
                    add(parent, name, item);
            }
        }
    };

    // get database object
    using namespace riegl::rdb::pointcloud;
    riegl::rdb::Pointcloud& rdb = reader.pointcloud();

    // basic database information
    {
        QueryStat stat = rdb.stat();
        GraphNode tree = stat.index();

        // database ID and number of points
        MetadataNode node = root.add("database");
        node.add("uuid", rdb.getUUID());
        node.add("points", tree.pointCountTotal);

        // XYZ bounds (if available)
        Eigen::Vector4d minimum, maximum;
        if (reader.getBoundingBox(minimum, maximum))
        {
            MetadataNode bounds = node.add("bounds");
            MetadataNode min = bounds.add("minimum");
            MetadataNode max = bounds.add("maximum");
            min.add("X", minimum[0]); max.add("X", maximum[0]);
            min.add("Y", minimum[1]); max.add("Y", maximum[1]);
            min.add("Z", minimum[2]); max.add("Z", maximum[2]);
        }
    }

    // database transactions
    {
        const auto list = rdb.transaction().list();
        for (const auto& item: list)
        {
            const auto details = rdb.transaction().details(item);
            MetadataNode node = root.add("transactions");
            node.add("id", details.id);
            node.add("rdb", details.rdb);
            node.add("title", details.title);
            node.add("agent", details.agent);
            node.add("comments", details.comments);
            node.add("start", details.start);
            node.add("stop", details.stop);
            Converter::add(node, "settings", details.settings);
        }
    }

    // point attributes
    {
        const auto list = rdb.pointAttribute().list();
        for (const auto& item: list)
        {
            const auto details = rdb.pointAttribute().get(item);
            Converter::add(root, "dimensions", details.save());
        }
    }

    // meta data
    {
        MetadataNode node = root.add("metadata");
        const auto list = rdb.metaData().list();
        for (const auto& item: list)
        {
            const auto details = rdb.metaData().get(item);
            Converter::add(node, item, details);
        }
    }
}