My group is evaluating SciChart for high-performance realtime charting. We are testing the line renderer with XyDataSeries.
We were able to get good performance, but we need to squeeze out more, and our scenario is a bit different from how XyDataSeries is typically used.
To make a long story short, we cannot simply append, because our application has two strict requirements which mean we have to put a whole new set of points into the XyDataSeries at every step. That leaves us doing either:
var dataSeries = new XyDataSeries<double, float>(samples.Length);
dataSeries.Append(domain, samples);
m_renderableSeries.DataSeries = dataSeries;
or:
dataSeries.Clear();
dataSeries.Append(m_domain, samples);
(By the way, the first one is slightly faster: 190 ms vs 240 ms to draw 10 million points.)
Either way, this is obviously working against how XyDataSeries is implemented.
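For what it's worth, here is a minimal sketch of how the data-load step of the two strategies could be timed with System.Diagnostics (using the variable names from the snippets above; note the 190 ms / 240 ms figures include drawing as well, which this alone does not capture):
var sw = Stopwatch.StartNew();
// Strategy A: allocate a fresh series each step
var freshSeries = new XyDataSeries<double, float>(samples.Length);
freshSeries.Append(m_domain, samples);
m_renderableSeries.DataSeries = freshSeries;
Debug.WriteLine($"new + Append: {sw.ElapsedMilliseconds} ms");
sw.Restart();
// Strategy B: reuse the existing series
dataSeries.Clear();
dataSeries.Append(m_domain, samples);
Debug.WriteLine($"Clear + Append: {sw.ElapsedMilliseconds} ms");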
A faster way would be to just do:
var dataSeries = new ReadOnlyXyDataSeries(domain, samples);
m_renderableSeries.DataSeries = dataSeries;
where ReadOnlyXyDataSeries just takes the samples array and, without any copy, makes it available to the renderer.
So I implemented ReadOnlyXyDataSeries as an IXyDataSeries<double, float>.
To my surprise, however, this performs much worse (900 ms to draw 10 million points), when it should perform better (it is really just an XyDataSeries without the copy).
UPDATE: This is not quite true: I was setting IsSortedAscending = false on sorted data. Once I set it to true, performance is back to exactly the same as XyDataSeries, which is good but not stellar.
(Hack time.) I know there is room for more performance, because I tried the following (hacky, brutal, very bad) thing:
var internalList = (ISciList<float>)dataSeries.YValues;
Array.Copy(samples, internalList.ItemsArray, samples.Length);
instead of the Clear/Append pair, and it is much faster (30 ms are shaved off). No copy at all should be even faster.
Obviously I am missing something. So, what am I missing? How should I implement a custom IXyDataSeries in a fast way? Is there another way?
Good morning Lorenzo,
Again, my sincere apologies for our tardy response on this issue. I'd like to know why you need to throw away the whole DataSeries; however, I appreciate that you do, and will answer your question as best I can.
The reason the first method (DataSeries.Clear() then DataSeries.Append()) is faster is that internally the memory is not recreated. The memory is cleared, but not reallocated.
In the second method, new DataSeries(), you are creating a 160MByte buffer (10M points * 2 * 8 bytes) and garbage collecting another one. Despite the operations looking very similar, under the hood there are optimisations when you clear a DataSeries to re-use pooled memory.
When analysing performance and asking 'how can I improve it?', it's important to understand the physical limitations of the computer. The memory bandwidth of a modern PC is about 16 GBytes/second. 10 million points of double/double data is 160 MBytes, therefore it will take a minimum of 10 ms just to copy that data into memory, giving you a maximum frame rate of 100 FPS. Don't forget we also have to read back this 160 MBytes to draw it, so that's another 10 ms, dropping the maximum frame rate to 50 FPS, unless we do caching internally (which we do, if the data has not changed). This is a physical wall and cannot be circumvented. There is no cheap or easy trick to make the computer outperform its memory bandwidth.
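Restating that arithmetic (the same figures as above, assuming double/double data and ~16 GBytes/second of memory bandwidth):
10,000,000 points x 2 values x 8 bytes = 160 MBytes per refresh
160 MBytes / 16 GBytes per second = ~10 ms just to write the data, so at most ~100 FPS
write (~10 ms) + read-back (~10 ms) = ~20 ms per uncached frame, so at most ~50 FPS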
So, given this, how does DataSeries.Append() work under the hood? If the memory is already allocated (it is, if you Clear(): values are cleared but the memory is not destroyed, it is reused), then Append() simply fills those buffers again. However, it does a little bit more than that. Internally, inside the DataSeries, we calculate the distribution of your data: we check whether it is sorted ascending, whether it is evenly spaced, and whether it contains NaN, null or empty points. All of these flags are required for later optimisation of drawing: by knowing the properties of the data in advance, we are able to select the best-performing drawing algorithm for your situation. Rather than applying a hammer to everything, we choose the right tool.
Now it is possible to hack (override) the Data Distribution calculation and specify those flags yourself. If you know the distribution of the data, you can tell SciChart and avoid this calculation entirely, making DataSeries.Append much faster. There is an example of this here.
The DataSeries.DataDistributionCalculator is a class which determines the distribution of your data (sorted in X or not, evenly spaced in X or not), and the flags are used to determine the correct algorithm(s) for resampling, hit-testing and indexing of data. By default this all works automatically; however, if you want to save a few CPU cycles and you know in advance the distribution of your data, you can override the flags as follows:
var xyDataSeries = new XyDataSeries<double, double>()
{
    // Use this feature to override data-distribution calculation and
    // provide your own flags to save CPU cycles. Only use it if you are
    // certain about the distribution of your data.
    DataDistributionCalculator = new UserDefinedDistributionCalculator<double>()
    {
        IsSortedAscending = true,
        IsEvenlySpaced = true,
    }
};
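Putting those two pieces of advice together, a minimal per-frame refresh sketch for your scenario might look like the following. Treat it as a sketch under two assumptions: that the single generic parameter on UserDefinedDistributionCalculator is the X type (so it stays <double> for a <double, float> series, as in the snippet above), and that the calculator assigned at construction remains in effect across Clear() calls:
// Created once, sized for the full data set, with the distribution flags
// supplied up front so Append() can skip the per-point analysis.
var dataSeries = new XyDataSeries<double, float>(samples.Length)
{
    DataDistributionCalculator = new UserDefinedDistributionCalculator<double>()
    {
        IsSortedAscending = true,   // the domain is monotonically increasing
        IsEvenlySpaced = true,      // fixed sample spacing
    }
};
m_renderableSeries.DataSeries = dataSeries;

// Per step: reuse the pooled buffers instead of reallocating them.
dataSeries.Clear();
dataSeries.Append(m_domain, samples);
If either assumption does not hold in your version, the flags can simply be re-assigned after the Clear() and before the Append().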
If you would like to discuss your requirements further, and how to squeeze precious CPU cycles out of SciChart, I'd welcome you to get in touch with us via the http://www.scichart.com/contact-us page. We also have a project in progress now (ETA early September), a total performance overhaul, which may also help your project get the best from SciChart.
Best regards,
Andrew