To avoid surprises for application developers, this change
creates a new getFeatures method, so it is now clear beforehand
whether features or feature info markup will be returned. The
result is also grouped by layer, so application developers
always have a link between a layer and the feature info it
returns.
To make getFeatureInfo return markup for vector layers, this
change also adds a featureInfoFunction property to the vector
layer, which gives developers full control over how features are
rendered to feature info markup.
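A minimal sketch of the idea, using simplified stand-ins rather than the real map and layer classes; only the featureInfoFunction option and the per-layer grouping follow the description above, everything else is an assumption:

```js
// Hypothetical, simplified objects for illustration only.
var vectorLayer = {
  featureInfoFunction: function(feature) {
    // Application developers get full control over the markup per feature.
    return '<strong>' + feature.name + '</strong>';
  }
};

function getFeatureInfo(hitsByLayer) {
  // The result stays grouped by layer, so there is always a link between a
  // layer and the feature info markup it returned.
  return hitsByLayer.map(function(hit) {
    return {
      layer: hit.layer,
      info: hit.features.map(hit.layer.featureInfoFunction)
    };
  });
}

getFeatureInfo([{layer: vectorLayer, features: [{name: 'Road'}]}]);
// -> [{layer: vectorLayer, info: ['<strong>Road</strong>']}]
```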
Currently, the dirty flag is never reset (to false). This is a bug. Because renderFrame is called very often (every layer's renderFrame is called whenever any other layer needs to re-render), it is critical to know when we can bail out early. The dirty flag is currently the way that the vector layer renderer knows that it needs to do more work. On an empty cache, the renderFrame method of the vector layer renderer is called ~30 times for a single zoom in the vector layer example (due to tiles loading on other layers). Without this change, we miss the fast path out and clear/re-render the canvas all 30 times. With this change, we only clear the canvas and redraw 6 times in a typical zoom animation.
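A sketch of the intended flow, with all names as simplified stand-ins for the vector layer renderer internals:

```js
// Hypothetical renderer object for illustration only.
var renderer = {
  dirty: true,
  renderFrame: function(viewChanged) {
    if (!this.dirty && !viewChanged) {
      return 'skipped'; // fast path: the canvas is still valid, bail out early
    }
    // ...clear the canvas and re-render features here...
    this.dirty = false; // reset, so later calls can take the fast path
    return 'rendered';
  }
};

renderer.renderFrame(false); // 'rendered' (dirty was true)
renderer.renderFrame(false); // 'skipped' (dirty was reset)
```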
The reason for this change is that symbolSizes and maxSymbolSize
on the instance will be wrong as soon as the resolution changes
and cached tiles are used. It turned out that the approach used
now has several advantages: smaller symbolSizes objects, no need
to merge symbolSizes objects, and cache management for free (no
risk of memory leaks). Note that the symbolSizes and
maxSymbolSize for each tile are not strictly tile specific -
they represent the rendering pass that created the tile. This
has no negative side effects, and it has the advantage that
there is not a single additional loop needed to create these
structures.
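A sketch of the per-tile bookkeeping described above; symbolSizes and maxSymbolSize follow the description, the tile structure itself is a simplified stand-in:

```js
function finishTileRenderingPass(tileCanvas, pass) {
  return {
    canvas: tileCanvas,
    // Only the sizes seen in the pass that created this tile: smaller
    // objects, no merging, and the data is evicted together with the tile.
    symbolSizes: pass.symbolSizes,
    maxSymbolSize: pass.maxSymbolSize
  };
}

var tile = finishTileRenderingPass({width: 256, height: 256}, {
  symbolSizes: {'circle-red': [11, 11]},
  maxSymbolSize: [11, 11]
});
```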
With this change, hit detection for lines and points gets very
accurate, because the vector renderer instance keeps track of
line widths and point symbol sizes. After doing a bbox query in
the RTree, returned lines and points are evaluated against the
thresholds of their line width or symbol size. The KML example,
with its different symbolizers, now also has getFeatureInfo to
show this in action.
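A sketch of the refinement step, with a point-only distance check for illustration; the real renderer tracks line widths and symbol sizes per feature as described above:

```js
function refineHits(candidates, point) {
  return candidates.filter(function(candidate) {
    // Half the rendered line width or symbol size is used as the threshold.
    var size = candidate.lineWidth !== undefined ?
        candidate.lineWidth : candidate.symbolSize;
    return distance(point, candidate.coordinate) <= size / 2;
  });
}

function distance(a, b) {
  var dx = a[0] - b[0], dy = a[1] - b[1];
  return Math.sqrt(dx * dx + dy * dy);
}

refineHits(
    [{symbolSize: 10, coordinate: [0, 3]}, {symbolSize: 4, coordinate: [0, 3]}],
    [0, 0]);
// -> only the first candidate (threshold 5 >= distance 3)
```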
This method is an entry point for getting feature information.
Renderers can use a hit canvas or defer to a layer (source) to
get matching features for a pixel.
For now this is only implemented for vector layers, and it uses
a bbox query; the result cannot be refined further yet because
geometry intersection functions are still missing.
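A sketch of that bbox query, assuming a simple search API on the spatial index; names and signatures are illustrative only:

```js
// The pixel is assumed to already be translated into a map coordinate.
function getFeaturesAtCoordinate(coordinate, resolution, index) {
  var tolerance = 5 * resolution; // a few pixels of tolerance, in map units
  return index.search([
    coordinate[0] - tolerance, coordinate[1] - tolerance,
    coordinate[0] + tolerance, coordinate[1] + tolerance
  ]);
}

// Stub standing in for the layer's spatial index.
var index = {
  search: function(bbox) { return ['features intersecting ' + bbox.join(',')]; }
};
getFeaturesAtCoordinate([100, 50], 10, index);
```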
Rendering vector tiles with mixed geometry types does not work
as expected, because the tile is created without the geometries
that need another rendering pass while their icons are still
loading. This was discovered by @bartvde when working on the
KML parser, where mixed geometry types are common.
This change fixes the issue by breaking out of rendering
entirely when renderFeaturesByGeometryType returns a deferred
state. In addition, there was a related bug: icons were added
to the cache regardless of their loaded state. This is also
fixed now.
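A sketch of the break-out described above; the function and argument names are simplified stand-ins for the renderer internals:

```js
function renderTile(featuresByGeometryType, renderFeaturesByGeometryType) {
  var deferred = false;
  Object.keys(featuresByGeometryType).some(function(type) {
    deferred = renderFeaturesByGeometryType(type, featuresByGeometryType[type]);
    // Returning true stops the loop: break out of rendering entirely when a
    // pass is deferred (e.g. an icon is still loading), so the tile is not
    // created without those geometries.
    return deferred;
  });
  return !deferred; // only fully rendered tiles should end up in the cache
}

renderTile({point: [], linestring: []}, function(type) {
  return type === 'point'; // pretend the point pass is deferred
});
// -> false: nothing gets cached, the tile will be rendered again later
```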
The previous logic assumed that if there were any tiles to render, the dirty state should be false. The correct logic is: if we did not render during the animation, dirty is true.
This avoids features being rendered multiple times when they
cross tile borders. Currently this makes the style-rules.html
example extremely slow. Fix for that to come in my next commit.
The RTree can easily maintain an additional index dimension,
by passing a type with each added item. Now instead of
maintaining an RTree for each geometry type, we have a single
RTree with a type filter. With this change, using the RTree
finally speeds up rendering as expected.
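A sketch of the type-filter idea; the index below is a naive array rather than a real RTree, just to show the single-index-plus-filter approach:

```js
var items = [];

function insert(bbox, item, type) {
  items.push({bbox: bbox, item: item, type: type});
}

function search(bbox, opt_type) {
  return items.filter(function(entry) {
    return (opt_type === undefined || entry.type === opt_type) &&
        intersects(entry.bbox, bbox);
  }).map(function(entry) { return entry.item; });
}

function intersects(a, b) {
  return a[0] <= b[2] && a[2] >= b[0] && a[1] <= b[3] && a[3] >= b[1];
}

insert([0, 0, 1, 1], 'a point', 'point');
insert([0, 0, 10, 10], 'a line', 'linestring');
search([0, 0, 5, 5], 'point'); // -> ['a point']
```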
We need a more flexible event system. We could have a VectorLayerEvent type and dispatch 'featuresadded' here. But listeners typically want the features, and perhaps the extent. This won't be true for all vector layer events, which suggests a more specific VectorFeatureEvent type (or something similar).
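A purely speculative sketch of what such an event could carry; none of these names exist yet:

```js
function VectorFeatureEvent(type, features, opt_extent) {
  this.type = type;         // e.g. 'featuresadded'
  this.features = features; // what listeners typically want
  this.extent = opt_extent; // may be undefined for some event types
}

var evt = new VectorFeatureEvent('featuresadded', [], [0, 0, 1, 1]);
```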
Now vector layers can have a style. ol.Style instances have an
apply method to get the symbolizer literals for a feature. If the
layer does not have a style defined, there is also a static
applyDefaultStyle function on ol.Style to get the default
symbolizer literals for a feature. The vector layer also got a
groupFeaturesBySymbolizerLiteral method, which returns an array
with features grouped by symbolizer, as needed by the canvas
renderer.
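A sketch of the grouping step the canvas renderer needs; the style lookup and the key derivation are simplified stand-ins (the real code compares symbolizer literals for equality):

```js
function groupFeaturesBySymbolizerLiteral(features, literalForFeature) {
  var groups = {};
  features.forEach(function(feature) {
    var literal = literalForFeature(feature);
    var key = JSON.stringify(literal);
    if (!groups[key]) {
      groups[key] = [[], literal];
    }
    groups[key][0].push(feature);
  });
  // Array of [features, symbolizerLiteral] pairs, one per distinct literal.
  return Object.keys(groups).map(function(key) { return groups[key]; });
}

groupFeaturesBySymbolizerLiteral(
    [{color: 'red'}, {color: 'red'}, {color: 'blue'}],
    function(feature) { return {fillColor: feature.color}; });
// -> two groups: the two red features together, the blue one on its own
```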
If we have a gridded vector source, the grid should have something to do with the source data (e.g. the vector data is available in a regular grid). The vector layer renderer's internal grid is for rendering canvas tiles and doesn't have anything to do with the source.
We should discuss whether post render functions must be run after each render frame or not. If they can be run after multiple render frames, it would make sense to increase the timeout. As it is, post render functions appear to run after every render, and it is hard to see the benefit in that case.
* Tiles are now cut out of the sketch renderer in a separate
pass. This ensures that point features at tile borders appear
on both sides of the border. However, if such features get
added in a later tileRange rendering pass, tiles from a
previous rendering pass will still not have that feature.
* The tile canvas is only created once, and cloneNode(false) is
  used to get a canvas for a new tile (see the sketch below).
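A sketch of those two points, assuming a browser environment:

```js
var tileSize = 256;

// The tile canvas is only created once...
var templateCanvas = document.createElement('canvas');
templateCanvas.width = tileSize;
templateCanvas.height = tileSize;

function cutTileFromSketch(sketchCanvas, offsetX, offsetY) {
  // ...and cloneNode(false) gives a fresh canvas with the same width/height
  // attributes but without any drawn content.
  var tileCanvas = templateCanvas.cloneNode(false);
  // Cut the tile out of the larger sketch canvas in a separate pass.
  tileCanvas.getContext('2d').drawImage(
      sketchCanvas,
      offsetX, offsetY, tileSize, tileSize, // source rectangle in the sketch
      0, 0, tileSize, tileSize);            // destination rectangle in the tile
  return tileCanvas;
}
```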
It looks like this approach will work well for panning (as anticipated). For animated zooming, it is not going to work as is; the canvas tile generation seems to be too much for this type of animation loop. There are clearly still areas for optimization, though:
* Don't create new tiles while animating between zoom levels. Using existing tiles only while animating should bring a significant performance gain.
* Simple spatial index for tiles - each tile coord in the matrix could have a feature lookup object (keyed by id). This needs to account for rendered dimension (as witnessed by the point being cut by a tile). Given that the current example uses only three features, adding the spatial index should only be a minor improvement.
* Reuse a fixed set of canvas tiles that are generated at construction (and increased/decreased with view size changes).
* If a fixed set of tiles is not used, at least new ones could be cloned from existing ones (minor).
* Do some profiling to look for more ideas.
In addition, world-wrapping needs to be addressed. I don't think this renderer is the right (or at least the only) place to address it. And the cache of tiles needs to be managed for real. But hey, at least we've got a working tiled vector renderer now.
I think dealing with this at this point would complicate things. Unfortunately, what we have now is not proper dateline wrapping (only arbitrary tile range extent wrapping).
What we want in the end is vector tiles repeated just like
raster tiles are. This change only avoids repeated tiles with
the same content being rendered and stored in the cache.