Maybe it is really not a question of which raster image format has better opening benchmarks, but rather of which raster image formats are the most efficient source formats for opening and reading as input into an R numerical array. And subsequently: what is the most efficient output format from R, assuming you'd be outputting results back to raster?

Either way, if you're going to work with raster in R you will likely be using the rgdal and ncdf packages to supplement what is contained in the R raster package, with principal reliance on the gdalwarp command; you'll need to work out the format dependencies there to make your raster choice. You'll find a fair bit of coverage on SO and on the various OSGEO and R forums, blogs and wikis. But as this is a GIS forum where Python use is in relative ascendancy, I'll note that there are advantages to working with raster data in a Python numpy array, with a similar dependence on the GDAL libraries for raster loading, conversion and export. Some folks find memory management and code structure in Python preferable to native R; perhaps take a look at RPy2 or PypeR, as either may be appropriate for your analysis use.
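To make that Python route concrete, here is a minimal sketch using the GDAL Python bindings (osgeo.gdal). The file names are hypothetical stand-ins, and the write-back half mirrors the "most efficient output format" part of the question.

```python
# Minimal sketch: raster file -> numpy array -> raster file via GDAL.
# "input.tif" / "output.tif" are hypothetical; any GDAL-supported
# format can be substituted by changing the path and driver name.
from osgeo import gdal
import numpy as np

ds = gdal.Open("input.tif")
band = ds.GetRasterBand(1)        # GDAL bands are 1-indexed
arr = band.ReadAsArray()          # whole band as a 2-D numpy array
print(arr.shape, arr.dtype)

result = arr * 1.0                # stand-in for the real analysis

# Write the result back out with the source's georeferencing intact.
driver = gdal.GetDriverByName("GTiff")
out = driver.Create("output.tif", ds.RasterXSize, ds.RasterYSize,
                    1, gdal.GDT_Float32)
out.SetGeoTransform(ds.GetGeoTransform())
out.SetProjection(ds.GetProjection())
out.GetRasterBand(1).WriteArray(result.astype(np.float32))
out.FlushCache()
```

The same few calls work for every GDAL-supported format, which is what makes comparing candidate formats cheap to test.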
A big question is whether you are going to read the entire raster from the file into memory before processing it, or whether the file is so large that you will process it incrementally or only touch some subset of it. If you will load it all into memory, then you will be doing mostly sequential access, and the fastest format will be a toss-up between plain and compressed storage (depending on things like how fast your CPU is relative to your disk); any of the binary formats will probably be pretty close, while ASCII will be slower. If you need to process a subset of a very large file, then a format which groups the subset you want closer together may be faster, e.g. tiles, or a format in which offsets can be computed. Sometimes uncompressed approaches win here because it is trivial to compute where any given part of the image resides within the file, especially if you need only part of a very large row; then again, compression can be done in a granular fashion that works well for some access patterns. Sorry, but you'll probably have to benchmark for your own access pattern rather than expect a one-size-fits-all answer, and the result may depend not just on the file format and the factors above, but on the drivers for that format and on your software. Sketches of a windowed read and a small timing harness appear at the end of this section.

Considering the fact that you are exporting some raster data as well, the best technique, IMHO, is to not use the ArcGIS PDF exporter at all, but rather to export to a high-resolution TIFF and then convert that TIFF to PDF using the Adobe Acrobat Pro renderer (see the export sketch below). You can tweak the rendering options in Adobe if need be, but this workaround gives you the best and, most importantly, the smallest result in file size. (Important note: this only works well with TIFF.) Adobe Acrobat works best; other third-party PDF makers are not as good. I've tested a few with mixed results: either the resulting PDF was larger, or it appeared more down-sampled, or the on-screen rendering in the viewer was subpar. Adobe is not cheap, but there is simply no comparison when it comes to accurate conversion and on-screen display. The PDF exporter in ESRI works reasonably well for exporting vector maps with no transparencies, but personally I would not recommend it for complex maps, maps with transparencies, or maps containing rasters. I've been suggesting this technique to people from time to time and received a lot of negative rap for it being a bit of a hack, requiring the purchase of additional software, etc., but I have not found anything better to date.
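For the subset-access case, this window-read sketch (again with a hypothetical file name) shows why tiled layouts help: GDAL's ReadAsArray takes a pixel window, so only the blocks intersecting that window need to be read and decompressed.

```python
# Sketch: read a 512 x 512 window from a hypothetical large raster
# instead of pulling the whole band into memory.
from osgeo import gdal

ds = gdal.Open("large.tif")
band = ds.GetRasterBand(1)

# Pixel window: x offset, y offset, width, height. With a tiled file,
# only the tiles overlapping this window are touched on disk.
window = band.ReadAsArray(2048, 1024, 512, 512)
print(window.shape)

# The file's internal block size (square tiles vs whole-row strips)
# hints at which access patterns its layout favours.
print(band.GetBlockSize())
```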
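And since the honest answer is "benchmark it", a crude harness along these lines is enough to compare candidates. The file names are hypothetical stand-ins for the same raster saved in each format under consideration.

```python
# Crude benchmark sketch: time full sequential reads of the same data
# stored in several candidate formats. Adapt full_read() to your real
# access pattern (e.g. windowed reads); rankings often change.
import timeit
from osgeo import gdal

candidates = ["plain.tif", "deflate.tif", "data.img"]  # hypothetical files

def full_read(path):
    gdal.Open(path).ReadAsArray()   # whole dataset into a numpy array

for path in candidates:
    seconds = timeit.timeit(lambda: full_read(path), number=5) / 5
    print(f"{path}: {seconds:.3f} s per full read")
```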
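For the TIFF half of that workaround, the export step can be scripted. This sketch assumes the classic arcpy.mapping API from ArcMap 10.x and a hypothetical map document path, with an illustrative 300 dpi setting; the PDF conversion itself is then done in Acrobat Pro, not in ArcGIS.

```python
# Sketch: export a map document to a high-resolution TIFF with arcpy
# (ArcMap 10.x mapping API); paths and dpi are illustrative.
import arcpy

mxd = arcpy.mapping.MapDocument(r"C:\maps\project.mxd")
arcpy.mapping.ExportToTIFF(mxd, r"C:\maps\project.tif", resolution=300)
del mxd  # release the lock scripts hold on the .mxd
```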