Fanstatic

October 15, 2011

A Pyramid-based framework we work on at Camptocamp uses pyramid_formalchemy and its companion module, fa.jquery. The latter relies on Fanstatic.

Fanstatic is basically a WSGI middleware that can inject script and link tags into HTML pages produced deeper in the WSGI stack. Any WSGI application or middleware wrapped by the Fanstatic middleware can call need() on Fanstatic resources to instruct Fanstatic to inject script or link tags for those resources. Fanstatic is simple, easy to use, and well documented.
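
To make this more concrete, here’s a minimal sketch of the pattern, with made-up library and file names (myapp, app.js, app.css):

from fanstatic import Fanstatic, Library, Resource
from webob import Response

# A Library is a named directory of static files; Resources live in it.
library = Library('myapp', 'static')
app_css = Resource(library, 'app.css')
app_js = Resource(library, 'app.js')

def app(environ, start_response):
    # Somewhere deep in the WSGI stack: need() tells Fanstatic to inject
    # the corresponding <link> and <script> tags into the returned HTML page.
    app_css.need()
    app_js.need()
    response = Response('<html><head></head><body>Hello</body></html>')
    return response(environ, start_response)

# The whole application is wrapped with the Fanstatic middleware.
app = Fanstatic(app)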

But I’ve been wondering what you can do with Fanstatic that you can’t do with Mako or any other template engine. With Fanstatic you can insert scripts and styles based on request-dependent conditions. You can also have a single place in the code where resources are inserted, thereby avoiding duplication in the HTML files that make up your web site. But these are things you can also do with a template engine. One of the goals of template engines is precisely to avoid duplicating (HTML) code, by placing common code in template pieces and having the template engine put them together to form the actual HTML page.

I actually see one case where Fanstatic could be particularly useful: when you need to extend or decorate pages that you don’t create yourself, because they’re produced by a library you rely on. (fa.jquery is one of these libraries.) With Fanstatic you can let the library create the page and have Fanstatic inject scripts and styles into it for you. But if the library doesn’t use Fanstatic to insert its own scripts and styles, you won’t be able to control where Fanstatic inserts yours: Fanstatic will insert them either at the very top or at the very bottom of the page, which can be a problem. Fanstatic could provide options to give the application developer more control over where resources are inserted, but it would never provide the needed flexibility. If the library you rely on does use Fanstatic, you can create Fanstatic resources that depend on the library’s resources, and thereby have Fanstatic inject resources in the desired order.
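
For instance, assuming the library exposes its main script as a Fanstatic Resource, the dependency trick looks like this (all the names here are hypothetical; in practice you would import the library’s resource rather than declare it yourself):

from fanstatic import Library, Resource

# Hypothetical stand-in for the resource declared by the library you rely on.
lib_library = Library('thelib', 'thelib-static')
lib_js = Resource(lib_library, 'lib.js')

my_library = Library('myapp', 'static')
# Depending on lib_js guarantees that my.js is injected after lib.js.
my_js = Resource(my_library, 'my.js', depends=[lib_js])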

In conclusion, I still have some doubts about the actual usefulness of Fanstatic, but they’re mitigated by the aforementioned “uncontrolled pages” case. And I may discover other use cases as I go.

Papyrus

July 9, 2011

A few days ago I pushed papyrus_mapproxy on Github. The objective of papyrus_mapproxy is to make it easy to embed MapProxy in Pyramid apps.

This new module is a good opportunity for me to describe what I’ve been up to with Papyrus.

I have developed five Papyrus modules: papyrus, papyrus_tilecache, papyrus_mapproxy, papyrus_ogcproxy, and papyrus_mapnik. The last four are companion modules for the first one.

I wrote these modules to learn Pyramid and assess its extensibility, with the goal of eventually providing extensions that ease the work of Pyramid developers building mapping apps.

The main module, papyrus, provides conveniences for creating feature web services. For example, it provides a GeoJSON renderer, and a full implementation of the MapFish Protocol.
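
For instance, registering the GeoJSON renderer in a Pyramid application looks something like the sketch below; the route, the view, and its return value are made up, and the exact renderer import may vary between papyrus versions:

from pyramid.config import Configurator
from papyrus.renderers import GeoJSON

def summits_view(request):
    # return objects the GeoJSON renderer can serialize,
    # e.g. a list of features (an empty list here, for the sake of the example)
    return []

def main(global_config, **settings):
    config = Configurator(settings=settings)
    # register the GeoJSON renderer under the name "geojson"
    config.add_renderer('geojson', GeoJSON())
    config.add_route('summits', '/summits')
    config.add_view(summits_view, route_name='summits', renderer='geojson')
    return config.make_wsgi_app()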

The papyrus_tilecache and papyrus_mapproxy modules make it easy to embed TileCache and MapProxy, respectively, in Pyramid apps.

The papyrus_mapnik module aims to make it easier to use Mapnik in Pyramid apps. This module is experimental and would need some work to be actually useful.

The papyrus_ogcproxy module provides a proxy service for OGC protocols. It was developed to work around the Same Origin Policy implemented in browsers.

I believe there’s high value in embedding services, like tile rendering and caching services, in the web application. That can greatly ease deployment. It also allows leveraging transverse layers of the application, like the security layer.

Building a consistent, well-integrated, and scalable application that relies on external, independent services is, to say the least, a big challenge. Assembling different types of services within a single application, and relying on horizontal scaling, is much more appealing to me.

Anyway, any feedback on Papyrus is welcome!

OpenLayers sandbox dev with git svn

November 10, 2010

I’ve been using git-svn for OpenLayers development for some time now. Although git-svn isn’t so easy to work with, I’m quite happy with it for OpenLayers.

So I have a git-svn clone of http://svn.openlayers.org/trunk/openlayers on my development machine. I use this git repository for “trunk work”, that is, mainly bug fixes meant to go to trunk. When I start working on a bug fix, I create a temporary branch, naming it after the id of the corresponding trac ticket. When the patch has been reviewed and accepted, I merge the temporary branch into master, dcommit, and remove the temporary branch.
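
For a hypothetical trac ticket #1234, that cycle looks roughly like this:

$ git checkout -b t1234 master
$ # hack, commit, get the patch reviewed and accepted...
$ git checkout master
$ git merge t1234
$ git svn dcommit
$ git branch -d t1234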

These days, I’ve been working on more experimental things (Kinetic Dragging). Using OpenLayers SVN sandboxes is nice for developing new features and experimenting, because they make it easy to show and share your work. So I needed a way to keep my experimental work in a sandbox while still managing my code with git-svn. Here’s what I did.

I started by creating a sandbox in the OpenLayers SVN repository:


$ svn cp http://svn.openlayers.org/trunk/openlayers http://svn.openlayers.org/sandbox/elemoine/kinetic

Then, I added a new svn-remote in my OpenLayers Git repository’s .git/config:


[svn-remote "svn-kinetic"]
url = http://svn.openlayers.org/sandbox/elemoine/kinetic
fetch = :refs/remotes/git-svn-kinetic

and fetched changes from that remote branch with:


$ git svn fetch svn-kinetic -r 10884

10884 is the number of the SVN revision created when the sandbox directory was made with svn cp. This command created a remote branch named git-svn-kinetic, which is listed when running git branch -r:


$ git branch -r
git-svn
git-svn-kinetic

Then I checked out the freshly-created remote branch, and created a local branch from it:


$ git checkout git-svn-kinetic
$ git checkout -b kinetic

The local branch “kinetic” is bound to the remote branch “git-svn-kinetic”, so “git svn dcommit” commands issued with the “kinetic” branch checked out go to http://svn.openlayers.org/sandbox/elemoine/kinetic, which is exactly what I want.

Function decorators in JavaScript

August 2, 2010

I’ve been looking at how to implement function decorators in JavaScript. The Firefox Sync extension (http://hg.mozilla.org/services/fx-sync) provides a nice implementation, which I’m going to describe in this post.

So let’s assume we have an application with “classes” (constructors and prototypes, really), and we always want the same behavior when exceptions occur in these classes’ methods.

Our classes look like:

var MyCtor = function() {};
MyCtor.prototype = {
    method: function(a, b) {
        // do something with a and b
    }
};

The common behavior is implemented at a single place in a decorator function:

var decorators = {
    catch: function(f) {
        // Return a wrapper that runs the decorated function in a try/catch.
        return function() {
            try {
                // Forward this and arguments, and preserve the return value.
                return f.apply(this, arguments);
            } catch(e) {
                console.log(e);
            }
        };
    }
};

decorators.catch is the decorator function. It returns a function that executes the decorated function (f) in a try/catch block and logs a message if an exception occurs.

Decorating method with decorators.catch is done as follows:

MyCtor.prototype = {
    // Mozilla-only "expression closure" syntax (JavaScript 1.8): the function
    // body is a single expression, whose value is implicitly returned.
    method: function(a, b)
        decorators.catch(function() {
            // do something with a and b
        })()
};

method now calls our decorator, and the actual logic of the method is moved into an anonymous function passed to the decorator. The anonymous function can still access the arguments a and b thanks to the closure.

You may be wondering why decorators.catch delegates the decoration to an inner function instead of doing it itself. This is what makes it possible to chain decorators. For example:

MyCtor.prototype = {
    method: function(a, b)
        decorators.lock(decorators.catch(function() {
            // do something with a and b
        }))()
};

where decorators.lock would be a new decorator of ours.

I guess there are other ways to implement function decorators in JavaScript, but I find this one simple and elegant.

Server-side OpenLayers

March 14, 2010

I’ve been interested in server-side JavaScript lately. As a proof of feasibility (to myself) I’ve put together a node.js-based web service that gets geographic objects from PostGIS and provides a GeoJSON representation of these objects.

For this I’ve used node.js, postgres-js and OpenLayers.

node.js is a lib whose goal is “to provide an easy way to build scalable network applications”. node.js relies on an event-driven architecture (through epoll, kqueue, /dev/poll, or select). I’d recommend looking at the jsconf slides to learn more about the philosophy and design of node.js.

The “Hello World” node.js web service looks like this:

var sys = require('sys'),
    http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.write('Hello World');
  res.close();
}).listen(8000);
sys.puts('Server running at http://127.0.0.1:8000/');

Thanks to node.js’s nice interface, I think the code is pretty much self-explanatory.

Assuming the above code is in a file named file.js, starting the web service is done with:

$ node file.js

Now we can use postgres-js to read data from a PostGIS table. postgres-js sends SQL queries to PostgreSQL through TCP. postgres-js is a node.js module, so it can be loaded with require() (just like the built-in sys and http modules).

var sys = require('sys'),
    http = require('http'),
    Postgres = require('postgres');

var db = new Postgres.Connection("dbname", "username", "password");

http.createServer(function (req, res) {
    db.query("SELECT name, astext(geom) AS geom FROM table", function (objs) {
        res.writeHead(200, {'Content-Type': 'text/plain'});
        res.write("it works");
        res.close();
    });
}).listen(8000);
sys.puts('Server running at http://127.0.0.1:8000/');

The last step involves using OpenLayers to deserialize from WKT and serialize to GeoJSON. To use OpenLayers in the node.js application, and load it with the require() function, I packaged OpenLayers as a node.js module. It was easy enough; see the modules doc.

And here’s the final code:

var sys = require('sys'),
    http = require('http'),
    Postgres = require('postgres'),
    OpenLayers = require('openlayers').OpenLayers;

var db = new Postgres.Connection("dbname", "username", "password");

http.createServer(function (req, res) {
    db.query("SELECT name, astext(geom) AS geom FROM table", function (objs) {
        var features = [];
        var wkt = new OpenLayers.Format.WKT();
        for(var i=0,len=objs.length; i<len; i++) {
            features.push(
                new OpenLayers.Feature.Vector(
                    wkt.read(objs[i].geom).geometry, {name: objs[i].name}
                )
            );
        }
        var geojson = new OpenLayers.Format.GeoJSON();
        var output = geojson.write(features);
        res.writeHead(200, {'Content-Type': 'application/json'});
        res.write(output);
        res.close();
    });
}).listen(8000);
sys.puts('Server running at http://127.0.0.1:8000/');

The End. Happy server-side JavaScript to all.

Y Combinator

July 2, 2009

I’ve been reading “The Little Schemer” by Daniel P. Friedman and Matthias Felleisen. Very interesting reading.

The ninth chapter introduces the Y Combinator function, a pretty interesting beast! Quoting

Testing

June 30, 2009

I’ve been reading about testing. Here are a few words on my thoughts about testing.

From my reading and understanding there are three types of tests:

  • Unit tests: a unit test tests a single function (e.g. an object method). A unit test must take care of isolating the tested function from the functions it normally relies on (when executed outside any test).
  • Integration tests: an integration test tests if two or more dependent functions correctly work together.
  • User-acceptance tests: a user-acceptance test checks whether a given function provides the behavior its users expect. User Interface tests belong to this type.

These three types of tests are complementary; they all have their importance when testing an application.

In OpenLayers, GeoExt, and MapFish (its JavaScript library), we provide unit and integration tests, and we actually don’t distinguish whether they’re of the unit or integration type (they’re all referred to as unit tests, which is fine I think). Not providing user-acceptance tests makes sense, as OpenLayers, GeoExt and MapFish are libraries as opposed to applications. The three libraries come with examples that in some way serve as user-acceptance tests. (In OpenLayers we’ve attempted to create actual user-acceptance tests, but developers haven’t paid much attention to them, possibly because their scope and goals weren’t well defined.)

Applications built with OpenLayers and/or GeoExt and/or MapFish instantiate classes from these libraries. Often, most of their code doesn’t include actual logic, and in that regard writing unit and integration tests for such applications doesn’t make much sense. However, as User Interfaces, these applications would deserve user-acceptance tests.

Providing automated User Interface tests is in my opinion a very difficult task, and I’d be very interested in having feedback from others on that.

MapFish and GeoExt

April 19, 2009

Matt Priour recently asked about the future of the client part of MapFish, and more specifically whether it will be replaced by GeoExt. This is actually a question that every MapFish user should be asking 🙂. Anyway, I thought an answer to that question could make a post on my blog. Here it is.

The short story: the client part of MapFish will not be replaced by GeoExt.

Now the longer story. As of today the client part of MapFish includes OpenLayers, Ext, and the MapFish JavaScript lib. The latter is itself composed of two parts: core and widgets.

  • core includes classes that are independent of Ext; most of them extend OpenLayers classes like OpenLayers.Control, OpenLayers.Protocol, OpenLayers.Strategy, etc. For example the client-side implementation of the MapFish Protocol is part of core.
  • widgets includes Ext-based classes, mostly GUI components (but not only: the FeatureReader, for example, is part of widgets). widgets also has stuff that’s directly related to the server side of MapFish; the print widgets are a good example.

GeoExt will not replace core, nor will it replace the widgets components that rely on MapFish web services. But basically every new Ext-based component that isn’t tied to any server-side stuff is going into GeoExt.

In addition to OpenLayers and Ext, MapFish will include GeoExt. We had initially planned to integrate GeoExt into MapFish earlier, but finally decided to let things settle down a bit in GeoExt before doing the integration. We’re currently doing that integration, and we will gradually be deprecating classes as their equivalents are added into GeoExt. For example, the work on FeatureRecord, FeatureReader and FeatureStore we’ve been doing in GeoExt will deprecate the FeatureReader, FeatureStore and LayerStoreMediator classes in the MapFish JavaScript lib.

Also, MapFish, as a framework, aims to provide an integrated solution. For client-side development, this means that the developer doesn’t need to download Ext, OpenLayers and GeoExt, install them within his application, and think about how to organize his application. Instead, we want applications created with the MapFish framework to be well organized from the start: with the Ext, OpenLayers, GeoExt and MapFish libs ready, with the JavaScript build tool ready, with the unit test suite ready, etc. I guess I will cover this topic in a later post…

Wooo, two posts in two days, scary… 🙂

Additions to the MapFish Protocol

April 18, 2009

We recently added new stuff to the MapFish Protocol.

As a refresher, let’s first take a look at what the MapFish Protocol had before the new additions.

(Note that you’d need the JSONovich Firefox extension to see the output of the examples given below in your web browser.)

Geographic query params

  • box={x1},{y1},{x2},{y2}: the features within the specified bounding box
  • geometry={geojson_string}: the features within the specified geometry
  • lon={lon}&lat={lat}&tolerance={tol}: the features within the specified tolerance of the specified lon/lat

Examples:
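
For a hypothetical summits layer served at /summits, such queries would look like this (URL encoding of the GeoJSON string is omitted for readability):

/summits?box=5.9,44.9,7.2,45.9
/summits?lon=6.5&lat=45.2&tolerance=500
/summits?geometry={"type":"Point","coordinates":[6.5,45.2]}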

Limiting and Sorting

  • limit={num}: the maximum number of features returned
  • offset={num}: the number of features to skip
  • order_by={field_name}: the name of the field to use to order the features
  • dir=ASC|DESC: the ordering direction

Examples:
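
Again with the hypothetical /summits service:

/summits?limit=20&offset=40
/summits?order_by=elevation&dir=DESC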

The new params

  • no_geom=true|false: so that the returned features have no geometry (“geometry”: null)
  • attrs={field1}[,{field2},...]: to restrict the list of properties returned in the features
  • queryable={field1}[,{field2},...]: the names of the feature fields that can be queried
  • {field}__{query_op}={value}: filter expression; field must be in the list of fields specified by queryable, and query_op is one of “eq”, “ne”, “lt”, “le”, “gt”, “ge”, “like”, “ilike”

And now an example combining all the new parameters:
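
Again with the hypothetical /summits service (%25 is the URL-encoded “%” wildcard used with ilike):

/summits?no_geom=true&attrs=name,elevation&queryable=name,elevation&name__ilike=%25col%25&elevation__ge=3500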

The above query returns a GeoJSON representation of the summits whose names include “col” and whose elevations are greater than or equal to 3500. The returned features have no geometry and their attributes include “name” and “elevation” only.

Not including the geometry in the features makes the parsing in the browser much faster, so for cases where the geometries aren’t needed this is a big win.

Credit for the “queryable={field}&{field}__{query_op}={value}” syntax goes to FeatureServer!

Secure TileCache With Pylons and Repoze

February 15, 2009

This post shows how one can secure TileCache with Pylons and Repoze.

In a Pylons application one can run a WSGI application from within a controller action. Here is a simple example:

    class MainController(BaseController):
        def action(self, environ, start_response):
            # wsgiApp is any WSGI application
            return wsgiApp(environ, start_response)

TileCache is commonly run from within mod_python. But TileCache can also be run as a WSGI application, and therefore from within the controller action of a Pylons application. Here’s how:

    from TileCache.Service import wsgiApp

    class MainController(BaseController):
        def tilecache(self, environ, start_response):
            return wsgiApp(environ, start_response)

Pretty cool… But it gets really interesting when repoze.what is added to the picture. For those who don’t know it, repoze.what is an authorization framework for WSGI applications. repoze.what now provides a Pylons plugin, making it easy to protect controllers and controller actions in a Pylons application. Here’s how our tilecache action can be protected:

    from TileCache.Service import wsgiApp
    from repoze.what.predicates import has_permission
    from repoze.what.plugins.pylonshq import ActionProtector

    class MainController(BaseController):
        @ActionProtector(has_permission('tilecache'))
        def tilecache(self, environ, start_response):
            return wsgiApp(environ, start_response)

With the above, anyone who tries to access /tilecache will have to be granted the tilecache permission. Otherwise, authorization will be denied.

TileCache is secured!

People often want finer-grained authorization, like giving certain users access to certain layers. With Pylons’ routing system this can be easily and elegantly achieved using repoze.what; I will show that in a later post.