
Changing visible range redraws the chart in a weird way


Answered

I am considering applying server-side licensing for my JavaScript application.

In the document below, there is a phrase “Our server-side licensing component is written in C++.”
(https://support.scichart.com/index.php?/Knowledgebase/Article/View/17256/42/)

However, the provided GitHub repository only contains ASP.NET sample code.
(https://github.com/ABTSoftware/SciChart.JS.Examples/tree/master/Sandbox/demo-dotnet-server-licensing)

I wonder if there is sample code implemented in C++ for server-side licensing.

Can you provide C++ sample code?
Also, are there any examples that run on Ubuntu?

Version
3.2.464
Best Answer

Hi there,

I created a Codepen to reproduce this here.

You’re not doing anything wrong in your code, and you are setting visibleRange correctly.

The reason you are seeing this is aliasing. Your example has 50,000 points with random amplitude from 0.0 – 1.0 and mountain fill, on a chart that is perhaps 1,000 – 2,000 pixels wide. That works out to approximately 25 – 50 random peaks per pixel.

SciChart is hardware accelerated (using WebGL and the GPU for rendering), and on graphics devices any triangle smaller than a pixel is culled automatically by the GPU. So, as the chart scrolls, a peak that is brought into view on one frame may, after being shifted by a fraction of a pixel, no longer be visible on the next draw.

To summarise: the choice of data here (very noisy) is resulting in a noisy output.
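As a quick sanity check on the numbers above (the 1,000 and 2,000 pixel widths are the assumed viewport sizes from this answer, not measured values):

```javascript
// Back-of-envelope check of the aliasing numbers above.
const pointCount = 50_000; // points in the example dataset

// Roughly how many random peaks land in each horizontal pixel
const peaksPerPixel = (chartWidthPx) => pointCount / chartWidthPx;

console.log(peaksPerPixel(2000)); // 25
console.log(peaksPerPixel(1000)); // 50
```

With tens of peaks competing for every pixel column, many of the triangles that make up the mountain outline are inevitably sub-pixel sized, which is exactly what the GPU culls.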

This sample and Codepen are improved greatly if you use strokeThickness: 1 (not 0.1). If you want a softer outline to the mountain series, I suggest using opacity on the stroke instead. Also, less random datasets appear much smoother as they are scrolled. For example, this random walk dataset:

  // Random data results in very thin triangles which can be < 1 pixel in size and cause aliasing
  // const array = new Array(50_000).fill(0).map(val => Math.max(Math.random(), 0));

  // Random walk test
  const array = [];
  array.push(0);
  for (let i = 1; i < 50000; i++) {
    array.push(array[i - 1] + (Math.random() - 0.5) * 0.01);
  }
  const array2 = new Array(50000).fill(0).map((val, index) => index);

  const dataSeries = new XyDataSeries(wasmContext);
  dataSeries.appendRange(array2, array);

  const series = new FastMountainRenderableSeries(wasmContext, {
    stroke: '#00000033',
    fill: '#005B96',
    strokeThickness: 1, // don't use 0.1; use 1.0 with opacity on the stroke (or 0.0 for no stroke)
    zeroLineY: 0,
    dataSeries,
  });

looks much nicer when scrolled right to left.

Let me know if this helps,

Best regards
Andrew


Thank you for your answer.

I noticed this doesn’t happen when scrolling with the mouse. If you disable the interval that changes the visible range and scroll (at any speed) using the mouse, there’s no culling; it’s completely smooth.

How is that possible?

Thanks!

  • Andrew Burnett-Thompson
    I think it’s because you’re scrolling one point at a time in code: 1/50,000 × the number of pixels on screen (assume 1,000) ≈ 0.02 pixels per event, whereas the mouse always scrolls a whole number of pixels. These artefacts suck, but they are a result of the way we’re drawing (using the GPU) and the trade-off of performance vs. quality. For example, SVG can achieve sub-pixel rendering and will never cull an object for being smaller than a pixel, but its performance is terrible by comparison to WebGL. Did you try strokeThickness: 1? That improved it a lot on my machine.
  • haba haba
    Ah, that makes sense. I have tried strokeThickness 1 and even 3. It improved things dramatically, but unfortunately my project needs the level of detail 0.1 gives me. Also, my dataset is extremely “noisy”, which is problematic. Anyway, while this sucks just a tiny bit, it’s a cheap price for the performance I get. I had used Highcharts for my project, and while the chart quality was good, its performance was absolutely not. So… let’s go SciChart!!! Haha. Thank you very much!
  • Andrew Burnett-Thompson
    I get it, I wish we could do sub-pixel rendering *and* super high speed :) strokeThickness: 0.1 really is a ‘null’ command to SciChart: we create a texture or ‘Pen’ to draw the stroke on the GPU, and all geometry is made up of triangles. If a triangle is smaller than a pixel, the GPU simply skips it (have you ever noticed this effect in games? https://blog.codinghorror.com/content/images/uploads/2011/12/6a0120a85dcdae970b015437fee512970c-800wi.jpg, see the left image). One solution is full-screen anti-aliasing: we can render the entire chart at 2x or 4x the size and then scale it down. This can provide an approximation of sub-pixel rendering when using WebGL.
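To make the arithmetic in the first comment concrete, here is a minimal sketch (the 1,000-pixel viewport width is the same assumption used in that comment):

```javascript
// Scrolling one data point at a time in code shifts the chart by a
// tiny fraction of a pixel, while mouse drags move whole pixels.
const pointCount = 50_000;
const chartWidthPx = 1000; // assumed viewport width

// Pixels moved per one-point visibleRange shift:
const pxPerPoint = chartWidthPx / pointCount;
console.log(pxPerPoint); // 0.02

// One-point steps needed to move a full pixel:
const stepsPerPixel = pointCount / chartWidthPx;
console.log(stepsPerPixel); // 50
```

So roughly 50 consecutive interval ticks produce less than one pixel of movement each, which is why sub-pixel features pop in and out between frames when scrolling in code but not with the mouse.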

thank you
