I have just finished my first-pass evaluation of SciChart's performance. When drawing smaller numbers of points (<100,000), SciChart outperforms the two other packages I have evaluated. However, when drawing more points (200,000 – 2,000,000), SciChart does not meet the performance of the other packages.

The evaluation involved varying the number of lines and the number of points per line. The transition to worse performance occurred in the following setups:

Line Count ~ Points/Line ~ Loop Count = Total Time (ms)

2 ~ 100,000 ~ 25 = 1,890
5 ~ 100,000 ~ 25 = 4,670
2 ~ 1,000,000 ~ 10 = 7,400
2 ~ 10,000,000 ~ 10 = 74,000

These setups average out to about 0.37 µs per point (e.g. 1,890 ms ÷ (2 × 100,000 × 25 points) ≈ 0.38 µs per point). This is where the other packages outperformed SciChart, as their per-point times kept improving at higher point counts.

I have tried to follow all of the performance tips I found on your website. I have included the code used to evaluate SciChart and would appreciate any help in improving the results.

Thanks,
Dave

Best Answer

Hi Dave,

Thank you for sending the sample over by email. I am investigating this now.

First up, I am able to reproduce the following test results on my laptop (Windows 10 running in a virtual machine on a MacBook Pro 13, 10 GB RAM, dual-core i7, i.e. not the fastest machine). I ran all tests in Release mode / Any CPU without the debugger attached.

Baseline

Line Count ~ Points/Line ~ Loop Count = Total Time (ms)

2 ~ 100,000 ~ 25 = 1,145
5 ~ 100,000 ~ 25 = 2,604
2 ~ 1,000,000 ~ 10 = 3,960
2 ~ 10,000,000 ~ 10 = 41,509

This is our baseline result. I get slightly faster results than you, but I see the same pattern (10M points is considerably slower than 1M and 100k).

As a note, I tried the solution from https://www.scichart.com/questions/question/comparing-performance-for-a-fifo-chart-shows-direct3-slowest-and-high-quality-renderer-fastest-1 of setting the render priority, and it helped in some cases, but not significantly.
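
For anyone reading along, the render-priority change referred to looks roughly like this (a minimal sketch; verify the RenderPriority property and enum values against your SciChart version):

    // Lower the render priority so redraws are scheduled less aggressively,
    // leaving more CPU time for DataSeries.Append on the background thread
    sciChartSurface.RenderPriority = RenderPriority.Low;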

Next step: I decided to profile the application using dotTrace.

Profiler Results

I’ve run a profiler over your application, choosing the parameters Line Count = 2, Points/Line = 1M, Loop Count = 25. This runs for approximately 11,000 ms, enough time to get a result and analyse performance.

What I noticed is that actual drawing time is just 3% of the main thread, i.e. the UI thread is not busy at all. You are appending data on another thread (good), but XyDataSeries.Append() is taking 40% of the time, and your append-points method is taking 50% of the time.

[SCREENSHOT]

Something we notice immediately is that 10% of the time is spent by SciChart calculating the distribution of the data (whether it is sorted or unsorted, evenly or unevenly spaced). This performance hit is amplified because you are appending point-by-point.

You can avoid the cost of the DataDistributionCalculator if you know the distribution of your data in advance, like this:

    // Tell SciChart up-front that the data is sorted ascending and evenly spaced,
    // so it does not have to compute the distribution on every append
    private IDataDistributionCalculator<double> _predefinedDistributionCalculator =
        new UserDefinedDistributionCalculator<double>()
        {
            IsEvenlySpaced = true,
            IsSortedAscending = true
        };

    XyDataSeries<double, double> dataSeries = new XyDataSeries<double, double>();
    dataSeries.DataDistributionCalculator = _predefinedDistributionCalculator;

Here are the results after this one change:

With User Defined DataDistribution

Line Count ~ Points/Line ~ Loop Count = Total Time (ms)

2 ~ 100,000 ~ 25 = 972
5 ~ 100,000 ~ 25 = 1,973
2 ~ 1,000,000 ~ 10 = 3,109
2 ~ 10,000,000 ~ 10 = 33,659

i.e. approximately 20% less time with this one optimisation.

With Initialising Capacity

A DataSeries can be initialised with its capacity up-front if you know how many data points you are going to append, for instance with this line of code:

    // Pre-allocate internal buffers for dataCount points, avoiding repeated re-allocation
    XyDataSeries<double, double> dataSeries = new XyDataSeries<double, double>(dataCount);

This initialises the DataSeries' memory up-front, so appending should be slightly faster.

Line Count ~ Points/Line ~ Loop Count = Total Time (ms)

2 ~ 100,000 ~ 25 = 860
5 ~ 100,000 ~ 25 = 2,065
2 ~ 1,000,000 ~ 10 = 3,031
2 ~ 10,000,000 ~ 10 = 32,255

Slightly faster … but nothing to write home about. It is still good practice if you know the size in advance.

With Appending Blocks

The biggest bottleneck in your profiler results is DataSeries.OnDataSeriesChanged. In this method we check whether the data series is suspended (locked) and, if not, we tell the chart to redraw.

You are suspending your data series, which is good, but even the simple 'IsSuspended' check takes a considerable amount of time when it runs once per appended point.
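
For context, the suspend pattern referred to looks like this (a minimal sketch; GetNextY() is a hypothetical stand-in for the real value source in the sample):

    // SuspendUpdates() returns an IUpdateSuspender; redraw notifications are
    // held back until it is disposed, at which point the chart redraws once
    using (dataSeries.SuspendUpdates())
    {
        for (int i = 0; i < dataCount; i++)
        {
            dataSeries.Append(i * step, GetNextY()); // GetNextY() is a placeholder
        }
    }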

Something we recommend in our performance tips and tricks is to append in blocks. The reason is that we need to perform per-point calculations and checks when appending data point-by-point.

I re-wrote your data-appending loop, from this (appending point by point):

        for ( int i = 0; i < dataCount; i++ )
        {
            x = i * step;
            y = m_rnd.Next( MIN_Y, MAX_Y ) * scale + rangeBottom;
            dataSeries.Append( x, y );
        }

to this:

        const int bufSize = 100;
        double[] xBuf = new double[bufSize];
        double[] yBuf = new double[bufSize];

        // Fill a buffer of 100 points, then append it in a single call
        // (assumes dataCount is a multiple of bufSize)
        for (int i = 0, k = 0; i < dataCount; i += bufSize)
        {
            for (int j = 0; j < bufSize; j++, k++)
            {
                x = k * step;
                y = m_rnd.Next(MIN_Y, MAX_Y) * scale + rangeBottom;
                xBuf[j] = x;
                yBuf[j] = y;
            }

            dataSeries.Append(xBuf, yBuf);
        }

Semantically it is the same; we are just appending in blocks of 100 points now, which minimises the impact of the per-point thread-synchronisation checks.
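
One caveat: if dataCount is not an exact multiple of bufSize, the loop above would overrun. A minimal sketch of handling the final partial block, reusing the variables from the loop above:

    // Append any leftover points (dataCount % bufSize) as one final, smaller block
    int remainder = dataCount % bufSize;
    if (remainder > 0)
    {
        double[] xTail = new double[remainder];
        double[] yTail = new double[remainder];
        for (int j = 0, k = dataCount - remainder; j < remainder; j++, k++)
        {
            xTail[j] = k * step;
            yTail[j] = m_rnd.Next(MIN_Y, MAX_Y) * scale + rangeBottom;
        }
        dataSeries.Append(xTail, yTail);
    }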

Line Count ~ Points/Line ~ Loop Count = Total Time (ms)

2 ~ 100,000 ~ 25 = 418
5 ~ 100,000 ~ 25 = 792
2 ~ 1,000,000 ~ 10 = 2,131
2 ~ 10,000,000 ~ 10 = 14,028

The result: we are now looking at roughly a 50–70% reduction in time to append data, depending on configuration.

With BufSize=1000

The same test repeated with buffer size = 1000 (appending in blocks of 1000):

Line Count ~ Points/Line ~ Loop Count = Total Time (ms)

2 ~ 100,000 ~ 25 = 297
5 ~ 100,000 ~ 25 = 671
2 ~ 1,000,000 ~ 10 = 1,000
2 ~ 10,000,000 ~ 10 = 9,456

With block size = 1000, we are now looking at a 70% – 80% reduction in time to append data.

So, block size really is very, very important 😉

[SCREENSHOT]

Re-running the profiler

This is what I see now.

[SCREENSHOT]

The amount of time spent appending is greatly reduced, and drawing is still not overloaded (only 10% of UI main-thread time is spent drawing).

I think we can optimise this further, but that will involve making code changes. So the question to you is … is it good enough? What results do you see with other chart providers?

Best regards,
Andrew


Hi Dave,

Unfortunately there is a bug in our forum (being fixed at present) whereby we cannot download attachments … Can I ask you to please submit the sample to support [at] scichart.com, or to me at andrew [at] abtsoftware.co.uk?

Secondly, I’m surprised we’re not meeting the performance of other chart packages! Since we advertise and pride ourselves on performance, I’m really keen to improve this for you. If you can send over the zip by email, we’ll take a look right away.

Finally, you should be aware that there are many tricks involved in charting performance. We had one customer report that DirectX SciChart was slower to append, but the problem was actually that the DirectX drawing engine was so fast it was drawing too often and blocking DataSeries.Append. Take a look at https://www.scichart.com/questions/question/comparing-performance-for-a-fifo-chart-shows-direct3-slowest-and-high-quality-renderer-fastest-1, which has a detailed analysis and resolution (which might be applicable here).

Best regards,
Andrew

  • Dave Leach
    Andrew, I have emailed the zipped code files to you (andrew@abtsoftware.co.uk).

Andrew,

I implemented the following suggested optimization changes (see the combined sketch after this list):

  • Added the DataDistributionCalculator to the XyDataSeries.
  • Used a block size of 1000 for appending data (if the data count is >= 10000).
  • Constructed the XyDataSeries object with the known data count.
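
For anyone landing on this thread later, here is a minimal sketch combining the three changes above. It is illustrative only: dataCount and step stand in for values from my test harness, and GetNextY() is a hypothetical placeholder for the real value source.

    // Combined sketch of the three optimizations (illustrative, not the attached sample)
    const int blockSize = 1000;

    // 1. Construct with known capacity to pre-allocate memory
    var dataSeries = new XyDataSeries<double, double>(dataCount);

    // 2. Predefine the data distribution so per-append distribution checks are skipped
    dataSeries.DataDistributionCalculator = new UserDefinedDistributionCalculator<double>()
    {
        IsEvenlySpaced = true,
        IsSortedAscending = true
    };

    // 3. Append in blocks rather than point-by-point (assumes dataCount % blockSize == 0)
    var xBuf = new double[blockSize];
    var yBuf = new double[blockSize];
    for (int i = 0, k = 0; i < dataCount; i += blockSize)
    {
        for (int j = 0; j < blockSize; j++, k++)
        {
            xBuf[j] = k * step;   // x values sorted ascending, evenly spaced
            yBuf[j] = GetNextY(); // hypothetical value source
        }
        dataSeries.Append(xBuf, yBuf);
    }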

With these changes, SciChart is now comparable in performance to the fastest of the other graphing packages I have tried. SciChart is actually significantly faster for certain combinations of series count and points per series.

Thanks for your help,
Dave

  • Dave Leach
    Measured performance with optimizations:
    Line Count ~ Points/Line ~ Loop Count = Total Time (ms)
    2 ~ 100,000 ~ 25 = 330
    5 ~ 100,000 ~ 25 = 835
    2 ~ 1,000,000 ~ 10 = 970
    2 ~ 10,000,000 ~ 10 = 10,300