Comparing renderer performance for a FIFO chart yields an unexplained outcome. Fastest renderer: the High Quality renderer. Slowest: Direct3D.
My problem is not that the renderers fail to reach the tested performance, but that the High Quality renderer appears to be the fastest. I would expect Direct3D to be the fastest.
Can anyone explain this, or possibly spot an error in my code or reasoning?
I have attached sample code.
The first picture shows a debug build and the second a release build. In the release build the software renderers are much closer together, but the difference is still noticeable.
Explanation of test:
I used the FifoSample from the samples suite and adapted it for performance testing.
There are two projects: WpfControl contains the control that uses SciChart, and WpfControlHost hosts that control. (The setup is only this complicated for a test because it mirrors the setup I will use later.)
I got the following outcome: essentially, the High Quality renderer is fastest, taking the least time and reaching the highest frequency. Then comes the High Performance renderer, and finally the Direct3D renderer.
What is the test supposed to do (if I didn’t make a mistake!):
The goal is to reach a sample rate of 110,000 Hz (hertz * this._multiplier in worker_DoWork), appending 10 values at once to each series on every iteration. The iterations try to reach 11,000 per second.
In the source code (method worker_DoWork): hertz = 11,000 and this._multiplier = 10.
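In outline, the worker loop does the following. This is a simplified sketch, not the attached code: the helper names AppendToEachSeries, NextValue, and the cancellation check are placeholders for what the real sample does with XyDataSeries.Append.

```csharp
using System;
using System.Diagnostics;

// Sketch of worker_DoWork (simplified; real code is in the attached sample).
// Each iteration appends _multiplier values to every series, so at hertz
// iterations per second the target is hertz * _multiplier = 110,000 samples/s.
class WorkerSketch
{
    const int hertz = 11000;     // target iterations per second
    const int _multiplier = 10;  // values appended per series per iteration

    void DoWork(Func<bool> cancelled)
    {
        var sw = Stopwatch.StartNew();
        long iterations = 0;
        while (!cancelled())
        {
            for (int i = 0; i < _multiplier; i++)
                AppendToEachSeries(NextValue()); // placeholder for XyDataSeries.Append
            iterations++;
        }
        // Achieved rate in samples per second per series:
        double samplesPerSecond =
            iterations * _multiplier / (sw.ElapsedTicks / (double)Stopwatch.Frequency);
    }

    void AppendToEachSeries(double value) { /* placeholder */ }
    double NextValue() => 0.0;               /* placeholder */
}
```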
Measurements are done with the .NET Stopwatch class, where elapsed seconds are documented as Stopwatch.ElapsedTicks / Stopwatch.Frequency (the frequency of the high-resolution timer).
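For reference, the measurement itself boils down to the following. This is a minimal sketch; DoOneTestRun stands in for the actual test body in the attached code:

```csharp
using System.Diagnostics;

class MeasurementSketch
{
    // Stopwatch.Frequency is the timer's tick rate (ticks per second, from the
    // high-resolution performance counter), so:
    //   elapsed seconds = ElapsedTicks / Frequency
    double MeasureHz()
    {
        var sw = Stopwatch.StartNew();
        long iterationCount = DoOneTestRun(); // placeholder for the test body
        sw.Stop();

        double seconds = sw.ElapsedTicks / (double)Stopwatch.Frequency;
        return iterationCount / seconds;      // achieved iterations per second
    }

    long DoOneTestRun() => 0; /* placeholder */
}
```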
To switch the renderers I uncomment the corresponding sections in UserControl1.xaml and run the test in a debug or release build without the Visual Studio debugger attached.
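Concretely, UserControl1.xaml contains three mutually exclusive RenderSurface sections, roughly like this (a sketch only: the exact type names and namespace prefixes depend on the SciChart version, so treat them as assumptions rather than the attached XAML):

```xml
<!-- Only one RenderSurface is left uncommented per test run.
     Type names here are assumptions and may differ per SciChart version. -->
<s:SciChartSurface x:Name="sciChartSurface">
    <s:SciChartSurface.RenderSurface>
        <s:HighQualityRenderSurface />
        <!-- <s:HighSpeedRenderSurface /> -->
        <!-- <s3D:Direct3D10RenderSurface /> -->
    </s:SciChartSurface.RenderSurface>
</s:SciChartSurface>
```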