Thursday, August 18, 2011

Why OpenCV is so fast

Convolution is the basis of many computer vision algorithms and is straightforward to implement in C, but in a comparison of various implementations OpenCV clearly comes out as the winner.

For the convolution of a 5x5 kernel with a 1000x1000 image of type float32 (time in ms):

opencv 5.43189048767
scipy.ndimage 36.602973938

This performance factor is visible in the implementations of other libraries as well, e.g. leptonica and theano.

It has been a goal of scikits.image to operate without too many explicit dependencies, so having a fast convolution implementation of our own is a very desirable goal.

The reason OpenCV performs so well is its use of SSE operators. In convolution, where we apply the same operation to multiple data items, the performance gains are considerable.

The following command, for example,

__m128 t0 = _mm_loadu_ps(S);

loads 4 float values from the pointer S into the 128-bit register t0, and all subsequent operations on this register, such as

s0 = _mm_add_ps(s0, s1);

operate on these four values in parallel.


I have implemented an SSE-based float32 convolution routine, and though it is a bit slower than OpenCV's, it narrows the performance gap considerably. Each data type still needs some additional work, as does support for row- and column-separable convolutions. With this we will have a good foundation for a fast convolution implementation.
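To illustrate why separable convolutions are worth supporting: a 2-D convolution with a separable kernel can be replaced by two cheaper 1-D passes. A minimal sketch using scipy.ndimage, here with the Sobel kernel, which happens to be separable:

```python
import numpy as np
from scipy import ndimage

image = np.random.rand(100, 100).astype(np.float32)
# The 3x3 Sobel kernel is the outer product of two 1-D parts.
col = np.array([1.0, 2.0, 1.0], dtype=np.float32)
row = np.array([1.0, 0.0, -1.0], dtype=np.float32)

# Full 2-D convolution with the 3x3 kernel...
full = ndimage.convolve(image, np.outer(col, row))
# ...matches two 1-D passes, costing O(2k) per pixel instead of O(k^2).
separable = ndimage.convolve1d(ndimage.convolve1d(image, row, axis=1),
                               col, axis=0)
assert np.allclose(full, separable, atol=1e-4)
```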

Benchmark of current results for the test case:

scikits.image 11.029958725
opencv 5.04112243652
scipy.ndimage 43.2901382446
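For reference, the scipy.ndimage entry in these benchmarks can be reproduced along the following lines (absolute numbers will of course vary by machine; the opencv and scikits.image entries would be timed the same way with their respective convolution functions):

```python
import time
import numpy as np
from scipy import ndimage

# The benchmark case from above: 5x5 kernel, 1000x1000 float32 image.
image = np.random.rand(1000, 1000).astype(np.float32)
kernel = np.ones((5, 5), dtype=np.float32) / 25.0

start = time.perf_counter()
result = ndimage.convolve(image, kernel)
elapsed_ms = (time.perf_counter() - start) * 1000.0
print("scipy.ndimage: %.2f ms" % elapsed_ms)
```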

Wednesday, July 20, 2011

Video work

I've added video reading support with OpenCV and GStreamer backends.

camera = CvVideo("test.avi")
# or camera = GstVideo("test.avi")
image = camera.get()

It would be great to extend the backend system to support switching classes as well, similar to the way we handle functions, e.g. Video("test.avi", backend="opencv").
The OpenCV video duration retrieval methods are currently broken on Linux, but I implemented them anyway for when they are fixed in the future. Until then, the functionality is available through GStreamer. Both implementations also play IP camera streams, so between them you are pretty much set to handle most cameras and codecs.
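A minimal sketch of how such a backend-switchable Video factory could look; the stub classes below merely stand in for the real OpenCV and GStreamer readers, and the registry is an assumption, not the actual implementation:

```python
# Stand-ins for the real backend reader classes.
class CvVideo:
    def __init__(self, path):
        self.path = path
    def get(self):
        return "frame-from-opencv"

class GstVideo:
    def __init__(self, path):
        self.path = path
    def get(self):
        return "frame-from-gstreamer"

# Hypothetical registry mapping backend names to reader classes.
_VIDEO_BACKENDS = {"opencv": CvVideo, "gstreamer": GstVideo}

def Video(path, backend="opencv"):
    # Look up and instantiate the reader class for the requested backend.
    return _VIDEO_BACKENDS[backend](path)

camera = Video("test.avi", backend="gstreamer")
print(camera.get())  # -> frame-from-gstreamer
```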

Monday, July 4, 2011

Backend Testing

The following is a code snippet of our testing system for Python Nose tests:

class TestSobel(BackendTester):
    def test_00_00_zeros(self):
        """Sobel on an array of all zeros"""
        result = F.sobel(np.zeros((10, 10), dtype=np.float32))
        assert np.all(result == 0)


When we inherit from BackendTester, tests are generated for all the implemented backends. If you have a missing dependency, the test will be skipped.
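A minimal sketch of how such a base class might generate per-backend tests; the registry and internals below are assumptions for illustration, not the actual scikits.image machinery:

```python
import unittest

# Hypothetical availability registry; in reality each entry would be set
# by attempting to import the backend's dependency.
AVAILABLE_BACKENDS = {"numpy": True, "opencv": False}

class BackendTester(unittest.TestCase):
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Clone every test_* method once per backend; tests for backends
        # with missing dependencies are skipped instead of failing.
        for name in [n for n in list(vars(cls)) if n.startswith("test_")]:
            original = getattr(cls, name)
            delattr(cls, name)
            for backend, available in AVAILABLE_BACKENDS.items():
                def variant(self, _orig=original, _b=backend, _ok=available):
                    if not _ok:
                        self.skipTest("backend %r not installed" % _b)
                    # the real system would switch to backend _b here
                    _orig(self)
                setattr(cls, "%s_%s" % (name, backend), variant)

class TestSobel(BackendTester):
    def test_zeros(self):
        self.assertTrue(True)  # stand-in for the real Sobel check

result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(TestSobel).run(result)
print(result.testsRun, len(result.skipped))  # -> 2 1
```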

Friday, June 17, 2011

Low barrier to entry

An emphasis of our backend system has been a really low barrier to entry for adding new backend functions. We have therefore tried to avoid unnecessary configuration files: you just drop a module into each submodule's backend directory, and the naming convention does the rest.

The documentation of each function is updated to indicate which backends are available. To do that without importing each backend module, we resort to quick parsing of a module's function definitions.
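Such parsing can be done with the standard library's ast module, which reads a module's source without executing it, so a backend whose dependency is missing can still be listed. A sketch (scan_functions is a hypothetical helper, not the actual scikits.image code):

```python
import ast

def scan_functions(source):
    # Parse the module source without executing it; top-level imports of
    # missing dependencies therefore cause no errors.
    tree = ast.parse(source)
    return [node.name for node in tree.body
            if isinstance(node, ast.FunctionDef)]

backend_source = '''
import cv2  # never executed, so cv2 need not be installed

def sobel(image):
    pass

def median_filter(image, size=3):
    pass
'''

print(scan_functions(backend_source))  # -> ['sobel', 'median_filter']
```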

At the start of execution, the source tree is scanned and scikits.image.backends is populated with all the backends found. This can be used for backend exploration:

use_backend("opencv")
use_backend(scikits.image.backends.opencv)

scikits.image already has an IO plugin system with very similar, though slightly more complicated, infrastructure, and in the future it may be wise to merge the two systems. This will require two tiers of backends: one for visualization and one for processing.

Wednesday, June 8, 2011

Decorators

We have looked at a few variations on the backend decorator theme: one uses the keyword argument style, while another uses import techniques (with no resulting overhead). The importing style proved to be a bit unpythonic, though, so we are going with a more explicit backend manager.

How to handle the documentation of a function and its backends is also a topic of discussion. The backend function's documentation should be made available if that backend is in use, or there should at least be a good indication of which backend is in service.

I have also started on some color conversion backends, and I've found that implementations differ to some extent. HSV, for example, is handled quite arbitrarily between frameworks, and this will hamper comparative testing. Even grayscale conversion shows subtle differences.
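As a concrete example of those grayscale differences, two common weighting choices give noticeably different results for the same pixel (the BT.601 weights below are the ones used by e.g. OpenCV's RGB-to-gray conversion):

```python
import numpy as np

rgb = np.array([200.0, 100.0, 50.0])  # a single RGB pixel

# ITU-R BT.601 luma weights
gray_601 = rgb @ np.array([0.299, 0.587, 0.114])
# Unweighted channel average, another choice seen in the wild
gray_avg = rgb.mean()

print(gray_601, gray_avg)  # two noticeably different gray values
```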

Monday, May 30, 2011

Initial backend work

We have started with the backend system. To recap, it will enable us to provide implementation backings for functions from various frameworks.

I've started with the Sobel operation and implemented it in both OpenCV and OpenCL. At the same time I've reworked the numpy version so that identical results are obtained. A great benefit of having different implementations at your disposal is that you can test and benchmark the algorithms against each other.

The working idea for now is to make a decorator that one can add to functions:

@add_backends
def sobel(image):
    # numpy implementation
    ...

This will add an optional backend implementation parameter to the function that the function will try to use:

# use the opencv sobel implementation
sobel(image, backend="opencv")
# use the opencl implementation on an available gpu
sobel(image, backend="opencl")

If the specified implementation is not found, we fall back to the default numpy backend. For global backend selection we are thinking of something along the following lines:

use_backend("opencl")

This will try to use opencl wherever possible.
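A minimal sketch of how this decorator-plus-fallback scheme could work; the registry and helper names here are assumptions for illustration, not scikits.image's actual internals:

```python
_registry = {}          # function name -> {backend name: implementation}
_default = ["numpy"]    # current global backend choice

def use_backend(name):
    _default[0] = name

def backend_impl(func_name, backend):
    """Register an alternative implementation for an existing function."""
    def deco(impl):
        _registry.setdefault(func_name, {})[backend] = impl
        return impl
    return deco

def add_backends(func):
    _registry.setdefault(func.__name__, {})["numpy"] = func
    def wrapper(*args, backend=None, **kwargs):
        impls = _registry[func.__name__]
        # Fall back to the default numpy version if the backend is missing.
        impl = impls.get(backend or _default[0], func)
        return impl(*args, **kwargs)
    return wrapper

@add_backends
def sobel(image):
    return "numpy sobel of " + image

@backend_impl("sobel", "opencv")
def _sobel_opencv(image):
    return "opencv sobel of " + image

print(sobel("img"))                    # -> numpy sobel of img
print(sobel("img", backend="opencv"))  # -> opencv sobel of img
print(sobel("img", backend="opencl"))  # missing -> numpy sobel of img

use_backend("opencv")
print(sobel("img"))                    # -> opencv sobel of img
```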

An alternative to our current setup would be to specify the backend more explicitly:

opencv.sobel(image)

The following week we will try to finalize this API and implement a few more functions.