Remove data that renders to the same pixel

I am using the SciChart control to render real-time data as it arrives from a data source. This source could be producing data many times per second. New values are appended to the series. If I leave my application running for a long time, the data series gets very long and rendering slows down. I can remove data points from the series when they are older than the VisibleRange and that certainly helps.

My problem is that the user might set the VisibleRange to span several days. If the data source is sending data at 10Hz then that would be 6 million data values per week, redrawing at 10Hz, which I guess will cause some CPU loading. I would like to be able to remove intermediate data from the series if new data being appended to the series would render in the same horizontal pixel as the previous data and is neither the maximum nor minimum Y value at that pixel. In order to do that I need to be able to examine the DateTime of the incoming value and determine its X pixel, then walk back through the existing data to see if it would be visible if rendered (in a new X pixel, or a new min or max in the same pixel as the previous value).

So, finally, to the question: Is there a method for determining the X,Y pixel coordinate for a given X and Y value?


Hmm, interesting problem! …

SciChart has a built-in resampling engine which performs a similar calculation, and it is enabled by default. The mode is FastLineRenderableSeries.ResamplingMode = MinMax. This aggregates points which are too close together to be discernible by the naked eye and omits them when drawing. It's really high speed and is able to sample a data-set of 10M points down to <10k points in 150ms.

In order to improve performance further, I have a few tips for you.

  • Use FastLineRenderableSeries instead of other (slower) types such as Mountain Series
  • Use StrokeThickness = 1, which is significantly faster than 2 or above
  • Set RenderableSeries.AntiAlias = false, which improves drawing speed further
  • Use a lower-quality but faster ResamplingMode, e.g. Mid, Min or Max. These are optimized to preserve only the mid, minimum or maximum feature respectively, whereas MinMax preserves both the minimum and the maximum.
  • Set SciChartSurface.RenderPriority = RenderPriority.Low, which puts the priority of drawing slightly below mouse input (this resolves issues such as stuttering under high load)
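
Taken together, the tips above might be applied like the sketch below. This is a sketch only: property and enum names follow this answer and SciChart's WPF API, but exact names may vary between versions, and sciChartSurface is assumed to be an existing SciChartSurface.

```csharp
// Sketch: apply the performance tips above to a fast line series.
var lineSeries = new FastLineRenderableSeries
{
    ResamplingMode = ResamplingMode.MinMax, // the default; Mid, Min or Max are faster still
    StrokeThickness = 1,                    // 1 is significantly faster than 2 or above
    AntiAliasing = false                    // disabling anti-aliasing improves draw speed
};
sciChartSurface.RenderableSeries.Add(lineSeries);
sciChartSurface.RenderPriority = RenderPriority.Low; // draw slightly below input priority
```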

If you still want to know about our API to convert pixels to data values and back, then yes we have one. You need to call:

// Retrieve a CoordinateCalculator valid for the current render pass and transform coordinates. 
var coordinateCalculator = sciChart.XAxis.GetCurrentCoordinateCalculator();
double pixel = coordinateCalculator.GetCoordinate(dataValue);
double data = coordinateCalculator.GetDataValue(pixel);
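
For the specific case in the question (deciding whether an incoming point would land in the same horizontal pixel as the previous one), the calculator can be used as in this sketch. It is an illustration, not code from SciChart: newXValue and lastPixelX are hypothetical names for the incoming X value and the rounded pixel of the previous append.

```csharp
// Sketch: does the new X value map to the same horizontal pixel as the last point?
var calc = sciChart.XAxis.GetCurrentCoordinateCalculator();
int newPixelX = (int)Math.Round(calc.GetCoordinate(newXValue));
bool samePixel = (newPixelX == lastPixelX);
if (!samePixel)
{
    // Moved to a fresh pixel column; always keep this point
    lastPixelX = newPixelX;
}
```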

Best regards,

  • asthomas
    Thank you. GetCurrentCoordinateCalculator does what I need. I realize that you have already implemented ResamplingMode, but that is too late for me. It is applied at render time, whereas I want to do the resampling on the raw data. In a long-running real-time trend chart the number of data values in the series is unbounded, so the raw data itself must be limited in size. I realize that this produces a low-fidelity raw data set, but that can be handled by re-reading the history from a data historian when the user changes the window size or visible range. If you are considering implementing helpers for virtualized data sets, that's one thing to bear in mind - the raw data set may need to be resampled, not just the rendered values.

You could also try this. I created a short sample (run in a Console window) which creates an XyDataSeries and resamples it to a PointSeries (used internally by SciChart immediately prior to rendering, when resampling is enabled).

I managed to get this up to about 64,000,000 points appended in the DataSeries before I got an OutOfMemoryException. There's undoubtedly some optimization we could do here. Internally, DataSeries use a doubling algorithm (similar to List<T>), which means that when you exceed the current capacity it asks for double the memory in both the X and Y columns to append data. Over 64M points it needs enough memory to store 128M values in each of the two columns, which is around 1GB of memory. Throw in a bit of heap fragmentation and it's easy to see how we get an OutOfMemoryException (exceeding the ~1.5GB per-process limit in Win32) at this level.

After appending the points, I use the method IDataSeries.ToPointSeries which is used internally to resample the data into a viewport representation prior to rendering.

I imagine if you wanted to, you could use an XyDataSeries to stream your data out from the database in order to create low-fidelity datasets for the overview control to help your virtualization. You'd have to do it in chunks of between 1E7 and 1E8 points, depending on the memory in your system.

Anyway I hope this is helpful, either to you, or to someone else. This discussion (and code sample) have also given me some good ideas to assist users in virtualization cases, by making the API more efficient and easy to use for this (often requested) use case!

Best regards,

// Sample to demonstrate how to use XyDataSeries to resample blocks of data
// (e.g. 64M data-points) into a smaller dataset, in order to provide an overview
// when virtualising large sets.
// On my i7 Notebook the append takes some time, but the resampling completes in less
// than 250ms. The Append performance was improved considerably by appending in blocks
// as opposed to point-by-point.
// Finally the output - the resampled PointSeries - is a double-representation of the
// input X-Y data which is used for drawing in SciChart.
try
{
    var dataSeries = new XyDataSeries<int, float>();

    const int bufferSize = 1000;
    int[] xBuffer = new int[bufferSize];
    float[] yBuffer = new float[bufferSize];

    for (int i = 0; i < 6.4E7; )
    {
        for (int j = 0; j < bufferSize; j++, i++)
        {
            xBuffer[j] = i;
            yBuffer[j] = (float)Math.Sin(i * 0.1);
        }

        // Appending point-by-point is slow; Append a re-useable buffer instead
        dataSeries.Append(xBuffer, yBuffer);
    }

    var stopwatch = Stopwatch.StartNew();

    const int viewportWidth = 1000;

    // Converts the default IDataSeries to a resampled IPointSeries which is used to
    // render XY series. Note at this point all values are represented by doubles,
    // e.g. DateTimes are converted to Ticks and cast to double, integers are cast to
    // double. So each Point X,Y is the double-representation of your input series
    var resampled = dataSeries.ToPointSeries(ResamplingMode.MinMax,
        new IndexRange(0, dataSeries.Count - 1), viewportWidth, false);

    MessageBox.Show(string.Format("Resampled {0} points into {1} output points in {2}ms",
                                  dataSeries.Count, resampled.Count,
                                  stopwatch.ElapsedMilliseconds));
}
catch (OutOfMemoryException)
{
    MessageBox.Show("Out of memory exception!");
}
  • asthomas
    I took a different approach to this. Since the database query is expensive, ideally I don't want to read the entire data set in the first place, so the database query is a min/max query that produces only as many min/max pairs as there are horizontal pixels in the chart, regardless of the data set size. This keeps the initial database query size very manageable. The next problem is how to cull the real-time data as it arrives. We could have a chart with a 1-month X axis, showing 10Hz data - 27 million points per series. As the application runs continuously through the month the data series will grow, and the memory usage of the app will act as if there is a memory leak. Obviously most points in a long series will never be drawn. At each horizontal pixel a maximum of 4 points will matter: the point of arrival at the pixel, the minimum, the maximum and the point of departure of the pixel. Any time I add a value to the data series, I look back 4 points to see if it is drawn in the same pixel as the new value would be. If so, I find the middle value of the next three points and delete it, then add the new value to the end of the series. The effect is to limit the steady-state length of the series to 4 x XPixels, regardless of incoming data volume. I guess most of the time this kind of algorithm is not relevant, but for long-lived processes with real-time additive data it allows the application to run continuously without exhausting memory. I know you're thinking that 1 month of 10Hz data is not particularly informative on a raster display, but we've learned that users will push the boundaries of any product much further than we ever intended. We actually had a user try to plot 2 years of 1Hz data.
  • Andrew Burnett-Thompson
    Very interesting, yes I know all about this! ;-)
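
The culling scheme asthomas describes (keep at most four points per horizontal pixel: the entry point, the minimum, the maximum and the exit point) can be sketched roughly as below. This is an illustrative reconstruction, not code from the thread; PixelCuller and pixelForX are hypothetical names, and pixelForX stands in for whatever X-to-pixel mapping is used (e.g. via GetCurrentCoordinateCalculator).

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch of per-pixel culling for an append-only series:
// when a new point lands in the same pixel column as the last four points,
// delete an intermediate point that is neither the min nor the max there,
// bounding the series length to roughly 4 x (horizontal pixels).
class PixelCuller
{
    private readonly List<(double X, double Y)> _points = new List<(double, double)>();
    private readonly Func<double, int> _pixelForX; // hypothetical X -> pixel mapping

    public PixelCuller(Func<double, int> pixelForX) => _pixelForX = pixelForX;

    public IReadOnlyList<(double X, double Y)> Points => _points;

    public void Append(double x, double y)
    {
        int n = _points.Count;
        int newPixel = _pixelForX(x);

        // Look back 4 points: if that point is in the same pixel as the new one,
        // the column already holds entry/min/max/exit candidates.
        if (n >= 4 && _pixelForX(_points[n - 4].X) == newPixel)
        {
            // Among the last three points, delete one that is neither the
            // minimum nor the maximum Y; the new point becomes the new exit.
            int lo = n - 3;
            int minIdx = lo, maxIdx = lo;
            for (int i = lo; i < n; i++)
            {
                if (_points[i].Y < _points[minIdx].Y) minIdx = i;
                if (_points[i].Y > _points[maxIdx].Y) maxIdx = i;
            }
            for (int i = n - 1; i >= lo; i--)
            {
                if (i != minIdx && i != maxIdx) { _points.RemoveAt(i); break; }
            }
        }
        _points.Add((x, y));
    }
}
```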