A computer engineering course on engineering techniques for parallel processing. It provides the knowledge and hands-on experience to develop application software for processors with massively parallel computing resources on inexpensive, widely available computers: multithreaded shared-memory programming with OpenMP, NVIDIA GPU multicore programming with CUDA and Thrust, and using the NVIDIA gaming and graphics cards in current laptops and desktops for general-purpose parallel computing under Linux.

# 3.  Recent announcements

## 3.1  Northeast Flooding - Irene and Lee website

http://northeastflooding.us/ has info on the effect of Irene and Lee on the northeast US. It was sponsored by NSF RAPID grant 1158899, PI: Tom Zimmie, co-PIs: Barb Cutler and WRF. (12/30/13)

## 3.2  2D Local-topological center of gravity

The formulae in this section display best in Firefox. In Chrome, some math characters appear blank at some zoom levels, and some versions of Internet Explorer garble them badly.

Here are new local topological formulae for the center of gravity of a polygon and polyhedron.

For a polygon:

Let {$\vec{M}$} be the 1st order moment. Then

{$$\vec{M} = \sum_{\{(\vec{P},\hat{T}, \hat{N}) \} } \frac{1}{6} \left( \vec{P} \cdot \hat{T}\right) \ \left(\vec{P} \cdot \hat{N}\right) \ \left( \vec{P} + \left( \vec{P}\cdot \hat{N}\right) \ \hat{N} \right)$$}

Let {$\vec{C}$} be the center of gravity, and {$A$} be the area. Then

{$$\vec{C} = \vec{M} \ / \ A$$}

For a polyhedron:

Let {$\vec{M}$} be the 1st order moment. Then

{$$\vec{M} = - \frac{1}{24} \sum_{\{(\vec{P},\hat{T}, \hat{N}, \hat{B}) \} } \left( \vec{P} \cdot \hat{T}\right) \ \left(\vec{P} \cdot \hat{N}\right) \ \left( \vec{P} \cdot \hat{B} \right) \ \left( \vec{P} + \left( \vec{P}\cdot \hat{N}\right) \ \hat{N} + 2 \left( \vec{P}\cdot \hat{B}\right) \ \hat{B} \right)$$}

Let {$\vec{C}$} be the center of gravity, and {$V$} be the volume. Then

{$$\vec{C} = \vec{M} \ / \ V$$}

These formulae are preliminary and may have errors.

This is part of a research theme of using the least amount of explicit information when processing polygons or polyhedra. A little more information is at Local Topology, which also references Union, although I need to expand those pages.

104-dimacs03-union.pdf and the other papers and talks listed there describe the formulae for edge length, area, and volume.

However, in brief, a polygon is represented as a set of occurrences of incidences of vertices and edges, {$\{(\vec{P},\hat{T}, \hat{N}) \}$}, where {$\vec{P}$} is a vertex, {$\hat{T}$} is a unit tangent vector pointing from the vertex along an adjacent edge, and {$\hat{N}$} is a unit normal vector pointing from the edge towards the inside of the polygon. There is one triple for each incidence of a vertex and an adjacent edge. For an N-gon, there are 2N triples.

For a polyhedron, we have {$\{(\vec{P},\hat{T}, \hat{N}, \hat{B}) \}$}, which also has {$\hat{B}$}, the binormal vector. A cube has 48 quadruples, one for each incidence of a vertex, edge, and face.

Note that the complete edges are not stored, although they are implicit. The polygon may have multiple nested or separate components and even non-manifold vertices.

(12/3/13)

## 3.3  Keynote talk at GeoInfo 2013, XIV Brazilian Symposium on GeoInformatics

This talk will survey specialized algorithms, libraries, and development environments to facilitate processing huge geoinformatic databases on modern hardware.

Modern hardware is generally parallel; the most economical platform, with the cheapest entry point, is NVIDIA GPUs. Desktop computers with current graphics cards, and even most laptop computers sold today, contain this platform, which was intended for realistic graphics but is also accessible to a programmer. Now almost anyone can access a platform running thousands of parallel threads; the major problem is: "now what?"

Large datasets demand data structures and algorithms, such as the uniform grid, with linear expected compute time and I/O cost. With dataset sizes lying farther out on the asymptotic size-cost curve, even {$N \log N$} cost is prohibitive.

Modern hardware is often parallel, which favors simple regular data structures and algorithms. Modern hardware may also have hundreds of gigabytes of main memory, which is surprisingly affordable. Although ignored by most researchers, that much memory permits internal algorithms, which access the data randomly, to solve large problems, in contrast to the more limited external algorithms.

Sophisticated development environments like Matlab and Mathematica can increase research productivity. They permit ideas to be realized in fewer lines of code, and may embed the best numerical techniques with implementation details hidden. This optimizes the often most important cost, the researcher's own time.

Specialized libraries can solve difficult problems and enhance productivity. CGAL, the Computational Geometry Algorithms Library, enables computing in the field of rational numbers extended with square roots. Here, lines and circles may intersect without any roundoff error (at a considerable cost in execution time). The Thrust package makes CUDA easier to use, and is even efficient. CUSP uses CUDA to process sparse matrices.

However, life is not perfect with these tools. They often do not play well with each other. The CUDA compiler can't parse C++11. GPUs have only modest amounts of high-speed memory. Tools developed for the current hardware may be buggy, badly documented, and not work on the latest hardware. Once you have committed to using a tool, switching away is difficult. Tools may become orphans whose development ceases. Commercial tools, with their captive audiences, get ever more expensive. In some domains, such as C++ compilers, the free tools are so good that the commercial tools have almost vanished.

A selection of these tools should be part of any productive GIS researcher's toolkit. However, since they do have learning curves, a modest amount of agency money spent producing tutorial material and providing programming support might increase the community's productivity.

(11/26/13)

## 3.4  FWCG2013 extended abstract

We present a parallel implementation of UNION3, an algorithm for computing the volume, surface area, center of mass, or similar properties of a boolean combination of many polyhedra. UNION3 has been implemented on the special case of the union of congruent isothetic cubes, and tested on 100,000,000 cubes. The algorithm uses a volume formula that does not require first computing the explicit union. All that is needed is the set of output vertices, including each vertex's position and neighborhood. UNION3's computation does not use randomization or sampling, and its output is exact. The implementation is parallel, using OpenMP. When using 32 threads, UNION3's elapsed time is only a few minutes, and the speedup, compared to using one thread, is a factor of ten. When executed on uniform random i.i.d input, UNION3's execution time is linear in the number of output vertices. UNION3 is intended as an example of a parallel geometry algorithm whose techniques should be broadly applicable.

(10/10/13)


## 3.7  New results for UNION3

UNION3 computes the volume and area of the union of many cubes: Processing {$10^8$} cubes of edge size 1/400 (of the universe size) took 5000 CPU seconds on a 3.4GHz Xeon.

Details: The data structure was an {$800^3$} uniform grid. The program used 57GB of main memory. The output volume was 0.765 (where the universe's volume is 1). If the cubes had not intersected at all, the output volume would have been 1.56 (meaning that the average point was inside 1.56 cubes). The output surface area was 831 (compared to 3750 if the cubes had not intersected at all). Of the 800,000,000 input vertices, 186,366,729 were visible, that is, not inside any cube, and so were output vertices. There were another 395,497,686 output vertices that were the intersection of 3 of the 600,000,000 input faces, and 811,328,383 output vertices that were the intersection of one of the 600,000,000 input faces with one of the 1,200,000,000 input edges. (Note: only those intersections that were not inside any input cube formed output vertices.)

I listed the various comparisons to show that the sample dataset had many overlaps, and so was nontrivial and would exercise any boolean combination program.

UNION3 parallelizes very well. Using OpenMP with 2 processors containing 16 cores and 32 threads on the above 100M-cube example cut the elapsed time by over a factor of 10, down to 473 seconds. The parallel version is probably correct because it computes the same numbers as the serial version.

UNION3's execution time depends on the number of intersections. For {$10^8$} smaller cubes of size {$1/500$}, the parallel elapsed clock time was reduced to 396 seconds (using a {$1000^3$} grid and 50GB of memory), because there were fewer intersections.

UNION3 scales down. Processing {$1,000$} cubes of size {$1/10$}, to compute a union volume of 0.573 and area of 19, took 0.02 seconds. Therefore it could be used to efficiently test for collisions among moving rigid objects, each approximated as a union of cubes, as follows. Compute the volume of the union of the set of possibly colliding objects. If it changes, the objects have started to overlap.

The talk mentioned in the previous announcement is a good intro to the principles. More details are here: Union, though that page has not been updated with the most recent results.

The significance of this result is as follows.

1. This implementation is merely a proof of principle. This is necessary because it is not obvious a priori that the ideas really work. Cubes were chosen because there is no round-off error when they intersect.
2. The concepts extend to the union of any set of polyhedra, with general topology (multiple nested shells of faces containing multiple nested loops of edges).
3. Since a general Boolean operation can be expressed in terms of unions and intersections, and since intersections are easier than unions, this can be extended to compute any mass property of any Boolean operation of a set of polyhedra.
4. As shown, and in contrast to most geometric algorithms, the algorithm is quite parallelizable.
5. This would appear to compare favorably with existing methods.
6. UNION3 does use a lot of memory to process large examples, but we believe any other method would use considerably more.

Next step: Try CUDA.

(updated 9/14/13 and 9/29/13 with newer results).

## 3.8  Please report any problems with my web pages.

They are complicated enough that things that have been working for years suddenly break without my knowing it.

## 3.9  Broken links fixed

If you access my site as http://wrfranklin.org/ instead of http://www.ecse.rpi.edu/Homepages/wrf/ then URLs that are not inside http://www.ecse.rpi.edu/Homepages/wrf/pmwiki, including pageLinks from my wiki to my web pages outside the wiki, are broken. (There is an extra /wrf in the generated link.) I'm working on it.

Fixed. (8/7/13)

## 3.10  MathJax fixed.

LaTeX math (using MathJax) should be working again. (8/7/13)

## 3.11  Search my wiki

You can search my wiki by using the search box at the top right of most pages. (This does not search my pages that are not part of the wiki, such as very old pages. However, google works there.) (8/12/13)

## 3.12  Old redirects deleted

I'm deleting the redirects from several obsolete URLs to their current locations. I don't know of anything that this breaks. However, if you get a new 404 not found, please tell me. (8/7/13)

# 4.  Professional summary

• BSc (Toronto).
• AM, PhD, Applied Math (Harvard).
• Program Director, Numeric, Symbolic, and Geometric Computation Program, CISE, National Science Foundation, 2000–2002.
• Visiting Professor, UC Berkeley, 1985–1986.
• Visiting positions at Genoa, Laval, CSIRO Canberra, National University of Singapore, 1992–1993.
• Supervised 16 PhD and 68 masters graduates: list.

# 5.  Research

## 5.1  Summary

Geometry has been my overriding interest since high school in the 1960s. Geometry is the "branch of mathematics that deals with the measurement, properties, and relationships of points, lines, angles, surfaces, and solids"1. The Geo in geometry is from the Greek Γη, meaning ''earth, ground, land''2. My major recently concluded project was Geo*, a DARPA–funded project for representing and operating on terrain, that is, elevation.

My big long-term unsolved problem is to devise a mathematics of terrain, which would respect its physical properties. To date, I've been nibbling around the edges.

One recently ended project3 attempted to predict how erosion occurs in levee failure by overtopping, and, after a failure, to reverse-simulate what happened.


I've applied the same underlying principles in Computational Geometry, producing algorithms useful for large datasets, mostly in 3D, and usually implemented.

Both topics are applications of my long term theme of emphasizing small, simple, and fast data structures and algorithms. Note that efficiency in both space and time can become more important as machines get faster. This research is applicable to computational cartography, computer graphics, computational geometry, and geographic information science.

16 PhD students (7 currently employed at a college) and 68 masters students have graduated under my advisement (names and theses).

My research has been externally funded by the National Science Foundation under Grants ENG-7908139, ECS-8021504, ECS-8351942, CCF-9102553, CCF-0306502, DMS-0327634, CMMI-0835762, and IIS-1117277; by DARPA/DSO, via the NGA, under the GeoStar program; by the US Army Topographic Engineering Center; and by IBM, Sun Microsystems, and Schlumberger-Doll Research.

Many of the algorithms have been implemented. The code is available for nonprofit research and education. RPI Computer graphics group

## 5.3  Geo* accomplishments

This was my 2005–2009 DARPA project. Details are here. A good summary is this Geo* talk (7/2010); videos in talk: 1, 2, 3.

Key points include these.

1. efficient hi–res visibility computation on terrain,
2. multiple observer siting to maximize joint viewshed,
3. ODETLAP, an extension of the Laplacian PDE to an overdetermined system of equations, which is used in many of the following results,
4. extremely compact lossy terrain (elevation) compression,
5. terrain compression that reconstructs slopes accurately,
6. lossily compressed terrain supports motion–planning (path planning),
7. path planning with sophisticated cost metric on large terrain, and
8. a better surface fitting procedure for bathymetry data that is very unevenly spaced.

# 6.  Recent Masters and PhD grads

## 6.1  Doctoral Graduates

1. Tsz-Yam (Eddie) Lau, Two-step ODETLAP and induced terrain framework for improved geographical data reconstruction, Nov 2012.
2. Chris Stuetzle, Representation and generation of terrain using mathematical modeling, 2012.
3. You Li, CUDA–accelerated HD–ODETLAP: a high dimensional geospatial data compression framework, 2011.

... and 13 more.

## 6.2  Masters Graduates

1. Jeffrey Sult, Computational analysis of first–person shooter levels, Apr 2011.
2. Michael J Snyder, Using The HTML5 Canvas Element For A Web–Based Multi–User Painting Application, Apr 2011.
3. Luke Perkins, An Integrated Approach to Choke Point Detection and Region Decomposition, 2010.

... and 65 more.

# 7.  Recent Publications, Talks, Etc.

1. W. Randolph Franklin. Algorithms, libraries, and development environments to process huge geoinformatic databases on modern hardware. Geoinfo 2013, XIV Brazilian Symposium on GeoInformatics, 26 Nov 2013. Keynote talk, http://www.geoinfo.info/geoinfo2013/index.php. (talk).
2. Randolph Franklin. Adaptive grids then and now. In Barry Wellar, editor, AutoCarto Six Retrospective, pages 119-124. Information Research Board Inc., 2013. (paper).
3. Chaulio R. Ferreira, Marcus V. A. Andrade, Salles V. G. Magalhães, W. R. Franklin and Guilherme C. Pena. A Parallel Sweep Line Algorithm for Visibility Computation. In Geoinfo 2013, XIV Brazilian Symposium on GeoInformatics, Campos do Jordão, SP, Brazil, 24-27 Nov 2013. Winner of best paper. http://www.geoinfo.info/geoinfo2013/index.php. (paper).
4. W. Randolph Franklin. Research Summary Slides, Sep 2013. (paper).
5. W. Randolph Franklin. Geometric Operations on Millions of Objects, 24 July 2013. (paper, talk).
6. Tsz-Yam Lau and W. Randolph Franklin. River network completion without height samples using geometry-based induced terrain. Cartography and Geographic Information Science, 29 Apr 2013. online. (paper).
7. Wenli Li, W. Randolph Franklin and Daniel Benedetti. Parallel Multiple Observer Siting on Terrain. In 23rd Fall Workshop on Computational Geometry, City College, New York City, USA, 25-26 Oct 2013. extended abstract. (paper).
8. Daniel Benedetti, W. Randolph Franklin and Wenli Li. CUDA-Accelerated ODETLAP: A Parallel Lossy Compression Implementation. In 23rd Fall Workshop on Computational Geometry, City College, New York City, USA, 25-26 Oct 2013. extended abstract. (paper).
9. W. Randolph Franklin. Parallel Volume Computation of Massive Polyhedron Union. In 23rd Fall Workshop on Computational Geometry, City College, New York City, USA, 25-26 Oct 2013. extended abstract. (paper, talk).
11. W. Randolph Franklin, You Li, Tsz-Yam Lau and Peter Fox. CUDA-accelerated HD-ODETLAP: Lossy high dimensional gridded data compression. In Xuan Shi and Volodymyr Kindratenko and Chaowei Yang, editors, Modern Accelerator Technologies for Geographic Information Science. Springer, 2013. (paper).
12. Thiago L. Gomes, Salles V. G. Magalhães, Marcus V. A. Andrade, W. Randolph Franklin and Guilherme C. Pena. Computing the drainage network on huge grid terrains. In 1st ACM SIGSPATIAL International Workshop on Analytics for Big Geospatial Data (BigSpatial-2012), Redondo Beach, CA, 6 Nov 2012. (paper).
13. Tsz-Yam Lau and W. Randolph Franklin. Automated artifact-free seafloor surface reconstruction with two-step ODETLAP (Ph.D. Showcase). In 20th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL GIS 2012), Redondo Beach, CA, 6-9 Nov 2012. (paper).
14. Chaulio R. Ferreira, Salles V. G. Magalhães, Marcus V. A. Andrade, W. Randolph Franklin and André M. Pompermayer. More efficient terrain viewshed computation on massive datasets using external memory. In 20th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL GIS 2012), Redondo Beach, CA, 6-9 Nov 2012. (paper).
15. W. Randolph Franklin, You Li, Tsz-Yam Lau and Peter Fox. CUDA-accelerated HD-ODETLAP: Lossy high dimensional gridded data compression. In 2012 International Workshop on Modern Accelerator Technologies for GIScience (MAT4GIScience 2012), Columbus OH, 18 Sep 2012. (paper, talk).
16. Christopher Stuetzle and W. Randolph Franklin. Representation of terrain data by drilling process. In 2012 AutoCarto International Symposium on Automated Cartography, Columbus OH, 16-18 Sep 2012. abstract. (abstract, talk).
17. Tsz-Yam Lau and W. Randolph Franklin. Improving river network completion under absence of height samples using geometry-based induced terrain approach. In 2012 AutoCarto International Symposium on Automated Cartography, Columbus OH, 16-18 Sep 2012. (paper, talk).
18. Christopher Stuetzle and W. Randolph Franklin. Representing terrain with mathematical operators. In 15th International Symposium on Spatial Data Handling, Bonn, Germany, 22-24 Aug 2012. http://www.sdh2012.org/. (paper, talk).
19. Tsz-Yam Lau and W. Randolph Franklin. Better completion of fragmentary river networks with the induced terrain approach by using known non-river locations. In 15th International Symposium on Spatial Data Handling, Bonn, Germany, 22-24 Aug 2012. http://www.sdh2012.org/. (paper, talk).
20. Mehrad Kamalzare, Thomas F. Zimmie, Christopher Stuetzle, Barbara Cutler and W. Randolph Franklin. Computer simulation of levee's erosion and overtopping. In International Symposium on Environmental Geotechnology, Energy and Global Sustainable Development, Los Angeles, California, USA, Jun 2012. (paper).
21. Mehrad Kamalzare, Christopher Stuetzle, Zhongxian Chen, Thomas F. Zimmie, Barbara Cutler and W. Randolph Franklin. Validation of erosion modeling: physical and numerical. In Geo-Congress 2012: Annual congress of the geo-institute of ASCE, Oakland, California, USA, 25-29 Mar 2012. http://www.geocongress2012.org/. (paper).
22. Salles V. G. Magalhães, Marcus V. A. Andrade, W. Randolph Franklin and Guilherme C. Pena. A new method for computing the drainage network based on raising the level of an ocean surrounding the terrain. In Jérome Gensel and Didier Josselin and Danny Vandenbroucke, editor, Bridging the Geographic Information Sciences: International AGILE'2012 Conference, pages 391-407. Springer, 24-27 April 2012. Winner of the Best Paper Award (2nd place). (URL) (paper).
23. Tsz-Yam Lau, You Li and W. Randolph Franklin. Joining fragmentary river segments with elevations and water flow directions using induced terrain (extended abstract). In 21st Fall Workshop on Computational Geometry, City College, New York City, USA, 4-5 Nov 2011. (paper).
24. Christopher Stuetzle, Barbara Cutler, Zhongxian Chen, W. Randolph Franklin, Mehrad Kamalzare and Thomas Zimmie. Ph.D. showcase: Measuring terrain distances through extracted channel networks. In 19th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL GIS 2011), Chicago USA, 1-4 Nov 2011. (paper, poster).
25. Mehrad Kamalzare, Zhongxian Chen, Christopher Stuetzle , Barbara Cutler, W. Randolph Franklin and Thomas F. Zimmie. Computer simulation of overtopping of levees. In 2011 Pam-Am CGS Geotechnical Conference: 14th Pan-American Conference on Soil Mechanics and Geotechnical Engineering, Toronto, 2-6 Oct 2011. (URL) (paper).
26. Salles V. G. Magalhães, Marcus V. A. Andrade and W. Randolph Franklin. Multiple Observer Siting in Huge Terrains Stored in External Memory. International Journal of Computer Information Systems and Industrial Management (IJCISIM), 3, 2011. (URL) (paper).
27. Tsz-Yam Lau and W. Randolph Franklin. Completing fragmentary river networks via induced terrain. Cartography and Geographic Information Science, 38(2):162-174, Apr 2011. (paper).
28. W. Randolph Franklin. The RPI GeoStar project. In Anne Ruas, editor, Proceedings of the 25th International Cartographic Conference, Paris, 3-8 July 2011. online, retrieved 26-Oct-13. (URL) (paper, talk).
29. Zhongxian Chen, Christopher S. Stuetzle, Barbara M. Cutler, Jared A. Gross, W. Randolph Franklin and Thomas F. Zimmie. Analyses, Simulations and Physical Modeling Validation of Levee and Embankment Erosion. In Geo-Frontiers 2011: Advances in geotechnical engineering, Dallas TX, 13-16 March 2011. (URL) (paper).

# 8.  Recent grants

1. NSF IIS-1117277: CGV: Small: Towards a mathematics of terrain, sole PI: WRF. $500,000, 8/1/2011–7/31/2014. Terrain, in this project, is defined as the elevation of the earth's surface above some reference geoid. Over the last few decades, ever larger quantities of terrain data with higher accuracy in (x, y) and z have become available. Improved bathymetry data of the sea floor has also been collected, and elevation data for other planets and their satellites is now available (for a generalized definition of "terrain"). The PI's goal in this project is to develop and validate a new mathematical representation of terrain, which will be closer to the physics of how terrain is formed and be designed to represent legal realistic terrain more easily than unrealistic terrain. Aside from constituting an interesting application of deeper mathematics in its own right, such a foundation for terrain representation that is geologically sound will enable the design of operators such as compression and siting from first principles. This work will generalize and extend the PI's previous successful terrain representation and algorithms work, such as ODETLAP. The new terrain representation will be a sequence of parameterized transformations of various classes inspired by the physics of how terrain is formed. Modeling the real world, the transformations will be nonlinear (e.g., real river valleys cannot be superimposed and added). Nonlinearity is powerful, but difficult to study. The first class of transformations, called scooping, will model how river valleys form, and will guarantee to produce only hydrologically valid terrain. Erosion, deposition and hill creation transformations will also be studied. Each class of transformation has many design options; for example, should fewer and more powerful, rather than many but less powerful, transformations be used? 
The PI's goal is to encode the terrain in as few bits as possible while satisfying, in addition to RMS error, richer, application–dependent, metrics such as multi–observer siting to maximize viewshed, and then path planning to avoid those observers. Hydrological accuracy and visual recognizability are other metrics. This project continues the PI's collaboration with Professor Marcus Andrade at the Federal University of Vicosa in Brazil. Project outcomes will be validated by means of extensive tests on real terrain databases. Broader Impacts: The simplest implication of this work will be more compact terrain compression algorithms. Thus, this research will allow larger terrain datasets to be accessed and processed by consumers in portable products such as GPS navigators. Easier access to large terrain databases will facilitate a probability distribution over possible realistic terrain, which in turn will allow optimizing operations such as multi–observer siting and path planning (the former has applications ranging from cell phone tower siting to surveillance, while the latter is important for energy conservation during transportation). Hydrological applications of better large terrain data include floodplain planning (flood damage in the US amounted to \$50,000,000,000 during the 1990s). Through involvement of graduate students in the PI's research and through his graduate courses, this project will also help to increase the educated workforce in a foundational discipline that is important to American productivity and future economic prosperity.
2. NSF CMMI–1158899: RAPID: Flood and Erosion Reconnaissance: Hurricanes Irene and Lee, Upstate New York and Western New England, PI: Tom Zimmie, co–PIs: Barb Cutler and WRF. \$30,123, 9/20/2011–8/31/2012.

# 10.  Travel Photos

• Snowshoeing, Feb 2009.
• Kayaking, hiking and sleeping in hammocks in the Amazonian rainforest, north of Manaus, July 2009.
• Arriving in Zermatt 7/7/2006 after spending 11 days hiking 164 km from Chamonix.
• Photographed by the Google streetview car.

(3/10/13) WRF_Orlova_HudsonBay.mp4 is a video of my trying to walk to the bow of the Lyubov Orlova during a stormy crossing of Hudson Bay in 2007. The crew used my camera to film me from the bridge.

(It was announced that the Lyubov Orlova, a 90m former cruise ship in the Arctic, now derelict and drifting across the Atlantic, may have capsized.)

# 12.  Fall 2013 Computer Graphics courses at RPI

## 12.1  ECSE-4750 Computer Graphics, Fall 2013, WRF

This fall will be 80% the same as Computer Graphics Fall 2012. The changes may be to reduce the immediate-mode OpenGL, add a little on OpenGL 3 and buffer objects, add a little libQGLviewer (a wrapper that makes viewing easier in OpenGL programs), add more programming, and maybe add a little CUDA. Students will be expected to be able to add new packages, like libQGLviewer, to their computers and to learn enough of them to write a hello world program.

OpenGL 2 will still be taught in addition to OpenGL 3. Compared to OpenGL 3, it is more useful for general graphics programming, easier to use, more widely available (even some Thinkpad laptops manufactured in 2011 don't support OpenGL 3), and better supplied with online examples. The resistance that OpenGL 3 is facing from programmers shows that others feel the same.

The text, Guha, will be the same because it is only a few years old, has lots of programming examples, and is used by other leading universities like U Md College Park.

Students in CS and GSAS with a strong programming background are welcome.

## 12.2  ARTS 4964 Art & Code & Interactivity CRN: 47963

Prof Shawn Lawson is teaching this interesting course, presented from an arts viewpoint.

Create anything, seriously. This course will introduce an open source, cross-platform programming library, OpenFrameworks, to create interactive experiences and artworks. OpenFrameworks is created by artists and programmers and used by artists and programmers worldwide for museum installations, VJ-ing, projection mapping, interactive experiences, and more. Topics addressed: basic programming, drawing graphics, video, audio, various hardware inputs, and Kinect. Questions asked: Is code an art form? What is interactive art? Is software, art? Prerequisite: ARTS 1020 (MS: Imaging) or permission of instructor. Cr: 4 Instr: Lawson, http://www.shawnlawson.com/