
Hi Sylvain,

On 16. Aug 2018, at 13:29, Sylvain Corlay <sylvain.corlay@gmail.com> wrote:

Actually, xtensor-python does a lot more in terms of numpy bindings, as it uses the C APIs of numpy directly for a number of things.

Plus, the integration into the xtensor expression system lets you do things such as view / broadcasting / newaxis / ufuncs directly from the C++ side (and all that is in the cheat sheets).

Ok, good, but my point was different. The page in question is about Python as a glue language. The other solutions on that site are general-purpose binding solutions for any kind of C++ code, while xtensor-python is xtensor-specific. xtensor, in turn, is a library that mimics the numpy API in C++.

The docs say:
"Xtensor operations are continuously benchmarked, and are significantly improved at each new version. Current performances on statically dimensioned tensors match those of the Eigen library. Dynamically dimension tensors for which the shape is heap allocated come at a small additional cost."

I couldn't find these benchmark results online, though. Could you point me to the right page? Google only produced an outdated Stack Overflow post where numpy performed better than xtensor.


That is because we run the benchmarks on our own hardware. Since xtensor is explicitly SIMD-accelerated for a variety of architectures, including e.g. AVX-512, it is hard to have a consistent environment to run the benchmarks in. We have an i9 machine that runs the benchmarks with various options, and we manually run them on Raspberry Pis for the NEON acceleration benchmarks (the NEON instruction sets are continuously tested with an emulator on Travis CI in the xsimd project).

Ok, but you can still put the results on a web page for everyone to see and judge for themselves. Just state what kind of machine you ran the code on. It is fine if the results on my machine differ; I am still interested in the results you get on your machines, and since you generate them anyway, I don't see why not publish them.

[tensor] is clearly a very overloaded term.

I agree that vector is a very overloaded term (the STL vector is particularly to blame). But until recently, tensor used to be a well-defined technical term which exclusively referred to a specific mathematical concept:

https://en.wikipedia.org/wiki/Tensor_(disambiguation)
https://medium.com/@quantumsteinke/whats-the-difference-between-a-matrix-and-a-tensor-4505fbdc576c
https://www.quora.com/What-is-a-tensor
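For reference, the classical notion defended here is the standard one from multilinear algebra (my summary, not part of the original exchange): a (p, q)-tensor on a vector space V is a multilinear map

```latex
T :\; \underbrace{V^{*} \times \cdots \times V^{*}}_{p} \;\times\; \underbrace{V \times \cdots \times V}_{q} \;\longrightarrow\; \mathbb{R},
```

whose components transform under a change of basis A as

```latex
T'^{\,i_1 \dots i_p}_{\;j_1 \dots j_q}
  = A^{i_1}{}_{k_1} \cdots A^{i_p}{}_{k_p}\,
    (A^{-1})^{l_1}{}_{j_1} \cdots (A^{-1})^{l_q}{}_{j_q}\,
    T^{\,k_1 \dots k_p}_{\;l_1 \dots l_q}.
```

A multi-dimensional array of numbers, by contrast, carries no such transformation behaviour, which is the distinction the linked articles make.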

Then Google started to popularise the term incorrectly with TensorFlow, and now another well-defined technical term is being watered down. xtensor is going with the tensorflow, sad.

Best regards,
Hans