In my many attempts to broaden my development language and platform knowledge, I have become keenly aware that much of the data, statistics, and especially spatial work happening beyond the enterprise takes advantage of sets of libraries, languages, and tools that feel a bit alien when you come from a primarily .NET/Java background.
A few months back I tried to get into Python and explore the scientific libraries NumPy and SciPy, with an eye toward future DEM interpolation and assembly. It turns out Python isn't all that hard to pick up (despite the syntax being very different from the C-style syntax of Java, JavaScript, C++ and C#), but the libraries and integration methodologies took a bit longer to figure out what to do with.
While my Python exploration has taken a back seat for the time being, I was recently turned on to the R programming language – which was a whole new experience for me. I had only heard of it within the past year, in the context of data analysis and statistics, but further exploration of R yielded some fascinating results.
First, learning how the heck R worked was a challenge. It took me a bit to figure it out, and finally downloading and running RStudio is what helped immensely. RStudio is available for Windows, Mac, and Linux; I started off in Windows thinking it would be easier, but quickly migrated back to my native OS X.
The first thing to understand about R is that it's essentially the scripting language for the R environment, much as VBA is the scripting language for the Microsoft Office environment. R isn't a stand-alone programming language; it's explicitly tied to the R environment and is your interface for controlling it. While I have not used MATLAB, if I understand correctly, the relationship between R and the R environment is very similar to that between MATLAB's scripting language and its environment.
The second thing I had to grasp was the syntax of the language. The first thing that tripped me up was seeing a dot (.) inside variable names and assuming it referred to methods or properties, as if the variable were a class. This is completely untrue. Historically the underscore wasn't permitted in variable names (it was once an assignment operator), so the convention became to use the dot; modern R does allow underscores, but the dot convention stuck (why camel case didn't win out here is beyond me, but let's move on). If you want to access the named elements of an object such as a list or data frame, instead of a dot you use the dollar sign ($).
The other really tricky thing was the assignment operator. It's not the equals sign (=), it's <- as in:
myvar <- x * y
There are also no required statement terminators – a newline ends a statement (a semicolon works as a separator, but you rarely see it). Once you get past a few of these basics, you can move on pretty quickly.
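To make these basics concrete, here is a tiny snippet you can paste into the console (the variable names are purely illustrative):

# dots are legal (and common) in variable names; they are not member access
point.count <- 50

# named elements of lists and data frames are reached with $
my.point <- list(x = 11.16, y = 1.24, z = 22.15)
elevation <- my.point$z    # 22.15

# assignment uses <- (= works in most places, but <- is the convention)
total.points <- point.count * 2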
So, let's get moving. I wanted to actually SEE something pretty quickly, just to understand how easy it might be to use R. I opened RStudio and eventually cobbled together a few lines of code that use some built-in library data to test out 3D plotting functions with the rgl library (which plots points using OpenGL).
Inside RStudio, you can enter the following:
library(rgl)
library(akima)
data(akima)
rgl.spheres(akima$x, akima$z, akima$y, 0.5, color="blue")
These 4 lines do 3 things – the first two lines load 'rgl' (an OpenGL visualization library) and 'akima' (an interpolation library). This is akin to an 'import' or 'using' statement in other languages. You can also substitute 'library' with 'require' and get much the same result (a quick example of the difference follows below).
The third line loads the sample dataset that ships with akima into the workspace. From what I understand, leaning on built-in sample data isn't too common in actual practice, but for demo purposes I was OK with it.
The fourth line uses rgl to create spheres with a radius of 0.5, colored blue, positioned according to the akima dataset's x, y, and z values.
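A quick side note on that library-versus-require point: require() returns TRUE or FALSE (with a warning) instead of stopping with an error when a package is missing, which makes it handy for a guarded load. This is just the common idiom, nothing specific to this demo:

if (!require(akima)) {
  install.packages("akima")   # only runs when the package isn't installed yet
  library(akima)
}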
After hitting enter in the console, you will notice a window pop up (on OS X, this runs under X11). Hey look! Blue dots! Actually, if you zoom in you will see that they are spheres:
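Incidentally, if you would rather position the camera from code than drag the rgl window around with the mouse, rgl exposes a view3d() function; the angles and zoom here are arbitrary, just to show the idea:

view3d(theta = 30, phi = 20, zoom = 0.8)   # rotate and zoom the scene
axes3d()                                   # add axes for a frame of reference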
These correspond to x, y, z values in space based on the sample akima data. Want to see those data values? Type akima at the prompt and you will see the data as follows:
> akima
$x
 [1] 11.16 24.20 12.85 19.85 10.35 24.65 19.72 15.91  0.00 20.87  6.71  3.45
[13] 19.99 14.26 10.28  4.51 17.43 22.80  0.00  7.58 16.70  6.08  1.99 25.00
[25] 14.90  3.22  0.00  9.66  2.56  5.22 11.77 17.25 15.10 25.00 12.13 25.00
[37] 22.33 11.52 14.59 15.20  7.54  5.23 17.32  2.14  0.51 22.69 25.00  5.47
[49] 21.67  3.31

$y
 [1]  1.24 16.23  3.06 10.72  4.11  2.40  1.39  7.74 20.00 20.00  6.26 12.78
[13]  4.62 17.87 15.16 20.00  3.46 12.39  4.48  1.98 19.65  4.58  5.60 11.87
[25]  3.12 16.78  0.00 20.00  3.02 14.66 10.47 19.57 17.19  3.87 10.79  0.00
[37]  6.21  8.53  8.71  0.00 10.69 10.72 13.78 15.03  8.37 19.63 20.00 17.13
[49] 14.36  0.13

$z
 [1] 22.15  2.83 22.11  7.97 22.33 10.25 16.83 15.30 34.60  7.54 30.97 41.24
[13] 14.72 10.74 21.59 15.61 18.60  5.47 61.77 29.87  6.31 35.74 51.81  4.40
[25] 21.70 39.93 58.20  4.73 50.55 40.36 13.62  6.43 12.57  8.74 13.71 12.00
[37] 10.25 15.74 14.81 21.60 19.31 26.50 12.11 53.10 49.43  3.25  0.60 28.63
[49]  5.52 44.08
50 points total. Go ahead, count those spheres. OK, don't do that – there are 50 spheres. So you're thinking, great – I have 50 spheres in 3D space – now what? Well, I imported the akima library specifically to test out interpolation. It's important to note that this is obviously an irregular grid, so interpolation will work a little differently here than it would on a regular grid.
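Rather than counting spheres by eye, you can ask R for the count directly:

length(akima$x)   # 50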
Next, let's interpolate a grid of lines into a new object and then display them in rgl as a surface over the points:
lines <- interp(akima$x, akima$y, akima$z,
                xo=seq(min(akima$x), max(akima$x), length = 40),
                yo=seq(min(akima$y), max(akima$y), length = 40))
rgl.surface(lines$x, lines$y, lines$z, color="green", alpha=c(0.5))
You will see something like this:
This is starting to look frighteningly like a DEM. Now what did we do here? We created a new object, lines, covering the range of the x and y coordinates of our points in the akima data, with 40 evenly spaced points along each axis, and interpolated z values across that grid. We then drew those values as a surface mesh over the 3D points, painted it green and gave it some alpha-channel transparency. The result kind of makes it look like terrain draped over a DEM, and essentially, that's pretty much what it is.
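The length = 40 in the interp call is just the grid resolution. If you want a denser (or coarser) mesh, change that number; assuming the akima data is still loaded, something like this gives an 80 x 80 surface (the 'dense' name is just illustrative):

dense <- interp(akima$x, akima$y, akima$z,
                xo=seq(min(akima$x), max(akima$x), length = 80),
                yo=seq(min(akima$y), max(akima$y), length = 80))
rgl.surface(dense$x, dense$y, dense$z, color="green", alpha=c(0.5))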
Now let’s interpolate some points and add them in:
points <- interpp(akima$x, akima$y, akima$z,
                  runif(200, min(akima$x), max(akima$x)),
                  runif(200, min(akima$y), max(akima$y)))
rgl.points(points$x, points$z, points$y, size=4, color="red")
You should see something like this:
OK, so what just happened? We created a new variable called 'points' using the akima interpp function, which does pointwise bivariate interpolation for irregular data. We passed in our existing x, y, and z values from the akima data, along with 200 new x and y sample locations drawn uniformly at random across their ranges (the 'runif' function has nothing to do with "RUNning" anything – it's r + unif, for the uniform distribution). The interpp function then takes care of finding the z values for us. We pump the results into our rgl display as points, though we could just as easily have used spheres or some other geometry, set them to a size of 4 and paint them red. And voila, we now have 250 points – 200 of them interpolated via the akima bivariate algorithm.
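One small aside: because runif() draws random sample locations, the 200 red points will land in slightly different spots every time you run this. If you want repeatable results, seed the random number generator first:

set.seed(42)   # any fixed seed makes the runif() draws repeatable
points <- interpp(akima$x, akima$y, akima$z,
                  runif(200, min(akima$x), max(akima$x)),
                  runif(200, min(akima$y), max(akima$y)))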
So after I played around with this for a while, I really wanted to get some "real data" into the mix to see what that would look like. So I set out to open a GeoTIFF that I had made from a set of LiDAR point cloud files. This is my DEM, and I wanted to see if I could make a mesh of the 3D points. It turned out to be much easier than I had anticipated. Here's what I ran:
library(raster)
library(rgl)
r <- raster("/Users/brian/Desktop/test.tif")
p <- rasterToPoints(r)
colnames(p) <- c("x", "y", "z")
p <- data.frame(p)
rgl.points(p$x, p$z, p$y, size=1, color="blue")
And here is the result, zoomed in a bit:
Pretty cool, eh? It's now pretty easy to visualize the point mesh and see where over the 3D surface the data was sampled. Please be aware I am not accounting for spatial projection at all right now; this is a simple mechanism for visualizing the 3D points stored in our file. It reminds me quite a bit of how MeshLab works for visualization. I basically convert my raster to points, name the columns (they may already have names, but just to be sure), convert to a data frame and draw the points.
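One caveat from my own tinkering: a LiDAR-derived GeoTIFF can easily contain millions of cells, and rasterToPoints will happily hand you a row for every one of them, which rgl struggles to draw. The raster package has a sampleRegular function that can thin the grid before converting; a rough sketch (the sample size and the 'r.small' name are arbitrary):

r.small <- sampleRegular(r, size = 100000, asRaster = TRUE)   # regular subsample of the DEM
p <- data.frame(rasterToPoints(r.small))
colnames(p) <- c("x", "y", "z")
rgl.points(p$x, p$z, p$y, size = 1, color = "blue")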
So this is cool, but what if I want to visualize this in 2D to see the height differential? And heck, what if I even account for the correct projection? As it turns out, it's not all that hard – we can use spplot (alternatively, you could use ggplot or a few other plotting mechanisms):
library(raster)
library(rgl)

.getGridTop <- function(x) {
  rs <- res(x)
  orig <- bbox(x)[,1] + 0.5 * rs
  GridTopology(orig, rs, dim(x)[2:1])
}

r <- raster("/Users/brian/Desktop/test.tif")
grd <- .getGridTop(r)
sp <- SpatialGridDataFrame(grd, proj4string=CRS('+init=epsg:3563'),
                           data=data.frame(values(r), check.names=TRUE))
sp.plt <- spplot(sp[1], main="Elevation (in meters)", scales=list(draw = TRUE),
                 col.regions = topo.colors(25))
print(sp.plt)
When I ran this against my file, I ended up with something that looked like this:
So now we've plotted a 2D heat-map visualization of elevations – and yes, the dark blue is 0 meters, which is sea level. This is a GREAT way to discern where buildings and other structures are located in your data, to help plan for where data may need cleaning up during overlay, or better yet, to decide what kind of interpolation you may want to perform.
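As a quick aside, if all you want is a fast 2D look at the elevations without building the SpatialGridDataFrame, the raster package's own plot method will do it in one line:

plot(r, col = topo.colors(25), main = "Elevation (in meters)")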
So I know what you are probably thinking – this is great, but who cares? R is an isolated environment. How can I possibly integrate it into my existing C++/JavaScript/.NET/Python work and even remotely take advantage of it? Quite simply, there are many R bindings available. For example, R.NET allows evaluation of R functions and conversion of the data back to .NET for consumption. I will be looking at this for potential PROJ.4 conversions and interpolations in the near future, as an easy way to use both environments while sidestepping the hassle of integrating PROJ.4.NET plus all the other GDAL components. It could also address the problem of finding a good .NET-accessible interpolation library.
I look forward to future experimentation with R, including integration into .NET, as well as potential visualization for the pre-processing components I am working with in the 2D and 3D mapping space. There is much to explore here.
– b