We have a NetCDF file of weather data and want to visualise it in Immerse. Is this possible?
We have converted the file into a readable format and can insert all of its values into the database.
Currently we are planning to use Dash by Plotly for the visualisation. Is this the correct approach, or is there a better way?
Please help with suggestions.
Anything inserted as a point, line, or polygon can be rendered by Immerse.
Have you succeeded in inserting the NetCDF file into OmniSciDB? Could you share the table definition and a subset of the data?
The most performant way of drawing maps is a back-end rendering approach, so Immerse plus the OmniSciDB back-end renderer would provide adequate performance. The Bokeh library probably also has a back-end renderer, but I don't have any experience with it.
How can we help you? Have you had any difficulties using Immerse or ingesting the data? Let us know.
We are in the process of adding direct NetCDF reading support. The core code is already in 5.10, and the eventual syntax is already documented under the 'raster' syntax for COPY FROM operations. However, we contributed some fixes to GDAL which took a month or so to land, so NetCDF in particular should land at 6.0. Anyway, it sounds like you have figured out a path already, presumably via Python libraries. The built-in support should be faster and more convenient, but will yield the same data. Direct Immerse support for reading without a SQL COPY FROM should be in either 6.0 or 6.1 (it's close to done).
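For anyone taking the Python route in the meantime, here is a minimal sketch of the flattening step such a pipeline typically needs, using a synthetic NumPy grid in place of a real NetCDF variable (with a real file you would read the coordinate and data arrays via netCDF4 or xarray instead; the function name here is just illustrative):

```python
import numpy as np

def grid_to_rows(lons, lats, values):
    """Flatten a 2-D gridded variable into (lon, lat, value) rows,
    the shape needed for a columnar lon/lat table load."""
    lon_grid, lat_grid = np.meshgrid(lons, lats)
    return np.column_stack([lon_grid.ravel(), lat_grid.ravel(), values.ravel()])

# Synthetic 2x3 grid standing in for one NetCDF variable slice
lons = np.array([10.0, 10.5, 11.0])
lats = np.array([45.0, 45.5])
temps = np.array([[280.1, 281.2, 282.3],
                  [279.5, 280.6, 281.7]])

rows = grid_to_rows(lons, lats, temps)
print(rows.shape)  # (6, 3): one row per grid cell
```

The resulting rows can then be bulk-loaded into the database with whichever client library you are already using.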
You shouldn't need Plotly; you can visualize directly in Immerse (even with your data imported from Python). As Candido mentioned, if you are doing the import yourself, you need to figure out which geometry column type to create. We accept "well-known text" (WKT) or columnar lon, lat for points.
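As a small illustration of the WKT option, this sketch (an assumed workflow, not OmniSci-specific code) formats lon/lat pairs as WKT POINT strings ready to load into a geometry column:

```python
def to_wkt_point(lon, lat):
    # WKT uses lon lat order (x y), not lat lon
    return f"POINT({lon} {lat})"

coords = [(11.25, 44.49), (9.19, 45.46)]
wkt_points = [to_wkt_point(lon, lat) for lon, lat in coords]
print(wkt_points[0])  # POINT(11.25 44.49)
```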
Points are the most obvious choice and render fine over a narrow scale range once you tweak the point size in the map renderer. However, one rectangular polygon per pixel is advisable if the data cell sizes are relatively large (1 km² or more). This obviously costs about 4x as much storage and GPU RAM, but it's still blazingly fast to render, and this approach works cleanly across all map scales without fiddling. (At 6.1, we plan to support fixed ground-sample-distance rendering on points, so you can have your cake and eat it too!)
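The polygon-per-cell approach amounts to emitting one closed WKT rectangle per grid cell. A minimal sketch, assuming cell centers and fixed cell sizes in degrees (function name is illustrative):

```python
def cell_polygon_wkt(lon, lat, dx, dy):
    """WKT rectangle covering a grid cell centered at (lon, lat),
    with cell sizes dx, dy in degrees. The ring is closed
    (first vertex repeated as the last)."""
    x0, x1 = lon - dx / 2, lon + dx / 2
    y0, y1 = lat - dy / 2, lat + dy / 2
    return f"POLYGON(({x0} {y0},{x1} {y0},{x1} {y1},{x0} {y1},{x0} {y0}))"

poly = cell_polygon_wkt(10.0, 45.0, 0.5, 0.5)
print(poly)
```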
In general, Plotly has more chart types and is obviously easier to generate and call from Python. Architecturally, however, it is a "front-end render" library, so for every one of your users, the full data to be visualized must be sent across the wire. Above a certain scale, in terms of data size or number of users, this slows down in a nonlinear way. The benefit of Immerse in this context is that you are sending a compressed binary image of a few kilobytes per client, rather than the many megabytes of a typical NetCDF per client. There is also a specific benefit if you are 'slicing' into a big data volume, as is common with NetCDF: we compile and send your query to the server's GPU, so subset queries are very fast and the only data going to the renderer is exactly what is needed for the visualization.
Many things depend on the data scale. One of our utility clients is using Immerse to visualize multiple NetCDF weather model runs in comparison with each other, each about 750 GB. At that scale, you don't even want to serve a single client by pushing the full dataset across the wire. The question to ask up front, relative to production, is basically: "if I'm successful, how will this scale?" If you only need to serve at most five clients at a time and the NetCDF files are relatively small, then of course you have many options.
Anyway, best of luck and please let us know if we can be of further assistance. I was planning on blogging a bit more about this once we’ve landed 6.0/6.1. So I’d love to learn more about your use case.
Thanks for the suggestion. We have quickly moved forward with this and have successfully created a dashboard for our use case. Is there any chance we can do interpolation, or use algorithms like K-nearest neighbours (KNN), to present the dashboard the way our users are looking for?
Glad to hear you were able to move quickly in creating a dashboard. On your questions: as of the 6.0 release we have user-defined table functions, including 'tf_geo_rasterize'. If you have point data, this will interpolate it. At 6.1, we're currently planning to add KNN as another table function.
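Until that lands, a KNN-style interpolation can also be sketched client-side in plain NumPy. This is an illustrative stand-in (function name and approach assumed, not the upcoming table function) that averages the k nearest sample values at a query location:

```python
import numpy as np

def knn_interpolate(points, values, query, k=3):
    """Estimate a value at `query` as the mean of the k nearest samples.
    points: (n, 2) array of lon/lat; values: (n,) array."""
    d = np.linalg.norm(points - np.asarray(query), axis=1)
    nearest = np.argsort(d)[:k]
    return values[nearest].mean()

points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
values = np.array([10.0, 20.0, 30.0, 100.0])
print(knn_interpolate(points, values, (0.2, 0.2), k=3))  # 20.0
```

Weighting by inverse distance instead of a plain mean is a common variant if nearer samples should dominate.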