Digital Storage Scopes 'Record Length'


tontaub
 

Hi, I realise I know virtually nothing about contemporary storage scopes.
The last storage scope I had at hand was analogue, and that was some 35 yrs ago. :-/
Anyway, if a datasheet speaks of 2.5kpts at all time bases, is that equal to 2500 sampled values?
I take it the amount of stored signal depends on the time base setting, but I can't tell how much that is.
How does the sample rate influence that?
I would like to know how slow a signal may be and how that relates to what I can see on the scope.
Thanks, Michael.

--


 

Hi Michael,

On Thu, Aug 15, 2013 at 2:19 PM, <egroups@...> wrote:
You have 2500 buckets to fill, standing on the wall. The more often
you take one down, pass it around, the sooner they'll all be gone. The
rate at which you do this, i.e. how often you sample, is called the
sampling rate. The higher your sampling rate, the more often samples
are taken, and the higher the bandwidth of the measurement due to the
Nyquist theorem which states that you can only encode frequencies up
to one half the sampling frequency (rate). I don't even know why
people cling to the concept of time bases in sampling scopes, turning
the knob just changes the sampling rate.

To calculate how long you have until the buckets are all gone, divide
your memory size by your sampling rate. I.e. if you have 2500 points,
and your sampling rate is 5 kHz, then that's 2500/5000 [1/Hz] = 0.5
[sec]. 2500 points is not a whole lot if you think in terms of time.
However, it is very likely enough to characterize a single period
of the wave being sampled. Sampling oscilloscopes only work for
repeating waveforms anyway, unless you use single-shot mode and accept
a very heavy bandwidth penalty.
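D.'s arithmetic can be sanity-checked in a couple of lines of Python (an illustrative sketch; the function names are mine):

```python
# How long a fixed record lasts at a given sample rate, and the Nyquist
# bandwidth limit. Function names are mine, for illustration only.

def capture_duration_s(record_length_pts, sample_rate_hz):
    """Seconds of signal the record holds: memory size / sample rate."""
    return record_length_pts / sample_rate_hz

def nyquist_bandwidth_hz(sample_rate_hz):
    """Highest frequency representable without aliasing."""
    return sample_rate_hz / 2

print(capture_duration_s(2500, 5000))  # 2500 pts at 5 kS/s -> 0.5 s
print(nyquist_bandwidth_hz(5000))      # -> 2500.0 Hz
```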

Cheers,
D.


Henrik Olsson <henrik@...>
 


 

On Thu, 15 Aug 2013 14:19:47 +0200, you wrote:


Anyway, if a datasheet speaks of 2.5kpts at all time bases, is that equal to 2500 sampled values?
That is exactly what that means.

I take it the amount of stored signal depends on the time base setting, but I can't tell how much that is?
The record length is fixed at a specific number of samples but the
sample rate is variable depending on the timebase setting.

A DSO like a TDS1012 or TDS2012 has a record length of 2500 samples
and displays 10 divisions of 250 samples each. The duration of time
acquired is 10 times the timebase setting (10 divisions) and the
sample rate is 250 samples per division divided by the timebase
setting. Using dimensional analysis:

250 samples/div divided by 1s/div = 250 samples / 1 second
total time captured is 1s/div * 10 divisions = 10 seconds

250 samples/div divided by 1us/div = 250 samples / 1 microsecond
250 samples/div divided by 1us/div = 250 million samples per second
total time captured is 1us/div * 10 divisions = 10 microseconds

How does the sample rate influence that?
It does not. The sample rate is set by the timebase setting and the
number of samples per division.
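The relationships above can be sketched in Python for the 2500-point case (illustrative only; the constants and names are mine, not Tektronix's):

```python
# A 2500-point DSO showing 10 divisions of 250 samples each: the sample
# rate follows from the timebase, and the captured time from the number
# of divisions.

SAMPLES_PER_DIV = 250
DIVISIONS = 10

def sample_rate_sa_per_s(timebase_s_per_div):
    """Sample rate = samples per division / time per division."""
    return SAMPLES_PER_DIV / timebase_s_per_div

def total_time_captured_s(timebase_s_per_div):
    """Record duration = timebase setting * number of divisions."""
    return timebase_s_per_div * DIVISIONS

print(sample_rate_sa_per_s(1.0))           # 1 s/div  -> 250.0 Sa/s
print(round(sample_rate_sa_per_s(1e-6)))   # 1 us/div -> 250000000 Sa/s
print(round(total_time_captured_s(1.0)))   # 1 s/div  -> 10 s captured
```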


tontaub
 

Hi D., Henrik, David,

so apparently it looks like, for instance, examining a short glitch (some ms) in a
very slow signal (seconds) is most likely not possible in a straightforward manner.
Either the anti-aliasing filter would smooth it out and/or it would simply fall
between two samples.
So the only chance would be a single-shot measurement, possibly triggered by this
error event (glitch) and maybe with some sort of pre-recording to get the complete
section of the distortion.
Either way I can't see it at all, nor can I determine its exact position
in order to track down its cause.
Do Tek scopes provide anything that would be helpful in that way?

Thanks, Michael.

On 15.8.2013, at 17:51 , cheater00 . wrote:

2500 points is not a whole lot if you think in terms of time.
However, it's very likely to be able to characterize a single period
of the wave being sampled. Sampling oscilloscopes only work for
repeating waveforms anyways, unless you do single-shot mode and accept
a very heavy bandwidth penalty.

On 15.8.2013, at 18:23 , Henrik Olsson wrote:

Generally speaking the scope selects a sample rate so that the memory
will last for the duration of the "sweep". Ie, if you select 1ms/div
and there's 10 divisions on your screen the memory will have to last
for 10ms so the scope selects a sample rate of 250k samples per second.
On 15.8.2013, at 18:30 , David wrote:

Using dimensional analysis:

250 samples/div divided by 1s/div = 250 samples / 1 second
total time captured is 1s/div * 10 divisions = 10 seconds

250 samples/div divided by 1us/div = 250 samples / 1 microsecond
250 samples/div divided by 1us/div = 250 million samples per second
total time captured is 1us/div * 10 divisions = 10 microseconds


Don Black <donald_black@...>
 

I've just bought a cheap DSO to connect to my computer via USB and am still learning about it. I'll make the following comments and hope someone really familiar with DSOs will correct any wild flights of fancy. It seems the little scope can store hundred(s?) of megabytes of data.

I assume you mean a digital storage scope, not a sampling scope that needs a repetitive signal. 2500 samples are very few by today's technology; I guess you're talking of an era when memory was scarce and expensive. With a modern digital scope you can store a long stream of data and should find the glitch.

Don Black.



David <davidwhess@...>
 


 

There are two reasons that recording and displaying a glitch shorter
than the sample interval is not a problem:

1. There is generally no antialiasing filter. Rise time is a critical
parameter for a time domain device like an oscilloscope and having it
change with sample rate would cause more problems than it would solve.
An antialiasing filter would also interfere with and defeat the
purpose of . . .

2. Any good DSO will have a glitch or peak detect mode where the
digitizer sample rate is maximum at every sweep speed. Decimation is
then used to lower the sample rate to what will fit within the record
length, but instead of selecting a single sample to store, the minimum
and maximum values from a set of samples are stored, so any glitch or
noise will be recorded and shown. Since two values are stored, the
record length is halved.

To give a concrete example, a 2230 in peak detect mode will record and
show glitches as narrow as 100 nanoseconds no matter how slow the
sweep speed is. A 2232 will do the same down to 10 nanoseconds and a
2430 will do the same down to 2 to 8 nanoseconds. Almost any
oscilloscope designed after those will do the same although sometimes
that feature gets left out like with some recent Rigols.

Of course after recording a glitch, where it occurred is only recorded
to a resolution limited by the decimated sample rate so if your record
length is 2500 points total and 250 points per division and the sweep
speed is 5 seconds per division for a recorded sample rate of 50
points per second, then you would only know where the 10 nanosecond
glitch occurred to within 40 milliseconds. It is 40 milliseconds and
not 20 milliseconds because storing both a minimum and maximum value
for a given time period effectively halves the record length.

If you want more time resolution in that situation, then you need a
longer record length and many oscilloscopes provide that.
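The min/max decimation described in point 2 can be sketched in a few lines of Python (a toy model, not any scope's actual firmware):

```python
# Min/max ("peak detect") decimation: each block of raw samples is
# reduced to its minimum and maximum, so a glitch a single sample wide
# survives, where picking every Nth sample would usually miss it.

def peak_detect_decimate(samples, block):
    """Reduce `samples` by `block`:1, keeping a (min, max) pair per block."""
    out = []
    for i in range(0, len(samples), block):
        chunk = samples[i:i + block]
        out.append((min(chunk), max(chunk)))
    return out

raw = [0.0] * 1000       # a flat trace...
raw[555] = 5.0           # ...with one narrow glitch

pairs = peak_detect_decimate(raw, block=100)
print(max(hi for lo, hi in pairs))  # -> 5.0 (the glitch survives)
print(raw[::100])                   # -> all zeros: plain decimation misses it
```

Note that the decimated record only locates the glitch to within its 100-sample block, which is exactly the 40 ms positional uncertainty David describes.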



tontaub
 

Hi Don,
no, in particular I was referring to something like a TDS2014C:
http://www.tek.com/oscilloscope/tds2000-digital-storage-oscilloscope
And I agree, this small amount of data storage made me wonder; it is partly why I started this thread.
But David made it clear with his examples how this is intended to work.
To find a glitch like I tried to describe, in a straightforward fashion by sampling the entire signal, it would take something like
ten times this amount of storage.

[2ms error seen up to the 10th harmonic (i.e. to quantify detail in the time domain) gives 1/0.002 * 10 * 2 = 10kHz sampling rate. So for a 2s signal I'd need 10k*2 = 20000 samples at least]
I see there are products w/ 20kpts and 100kpts/ch in the same price range as a TDS2014C from a direct competitor of Tek - but I wonder what other implications those products have.
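Michael's back-of-the-envelope estimate, written out in Python (my restatement of his stated assumptions):

```python
# Required sample rate and record length to resolve a 2 ms glitch in a
# 2 s signal, keeping detail up to the glitch's 10th harmonic, with a
# Nyquist factor of 2.

glitch_s = 0.002      # glitch duration
harmonics = 10        # detail wanted up to the 10th harmonic
signal_s = 2.0        # length of signal to capture

fundamental_hz = 1 / glitch_s                    # 500 Hz
sample_rate_hz = fundamental_hz * harmonics * 2  # Nyquist factor of 2
record_pts = sample_rate_hz * signal_s

print(round(sample_rate_hz), round(record_pts))  # -> 10000 20000
```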

Michael.



Don Black <donald_black@...>
 

Fair enough Michael, but you asked if there was something else available that would find your glitch and I was just suggesting a later scope could do that. Your analysis of the problem is worthwhile and I don't want to doubt it. Hope you find a way to resolve your problem with the equipment you have available.

Don Black.
PS As I said, I'm no expert on DSOs and hope I can learn something from it too. I have enough faith in this forum that if I say something stupid someone knowledgeable will correct it.




tontaub
 

Hi David,

--- In TekScopes@..., David <davidwhess@...> wrote:

There are two reasons that recording and displaying a glitch shorter
than the sample rate is not a problem:

1. There is generally no antialiasing filter. Rise time is a
critical parameter for a time domain device like an oscilloscope
and having it change with sample rate would cause more problems
than it would solve.
An antialiasing filter would also interfere with and defeat the
purpose of . . .
aaah, I really wondered _if_ there would be something like an aa-filter - I'd have preferred to see the raw samples
and draw my own conclusions from the data.

2. Any good DSO will have a glitch or peak detect mode where the
digitizer sample rate is maximum at every sweep speed.
Decimation is
then used to lower the sample rate to what will fit within the
record length but instead of selecting a single sample to store,
the minimum and maximum values from a set of samples is stored so
any glitch or noise will be recorded and shown. Since two values
are stored, the record length is halved.
Now we are talking!
That sounds much more like instrumentation engineering! ;-)
Since the peak detection will take time as well, is there something like a pre-record functionality to start the hi-res mode prior to the actual point of detection? Otherwise the error event might get cut off.
Or is this just a theoretical concern because the circuits are fast enough anyway
relative to the signal being scrutinised?

To give a concrete example, a 2230 in peak detect mode will record
and show glitches as narrow as 100 nanoseconds no matter how slow
the sweep speed is. A 2232 will do the same down to 10 nanoseconds
and a 2430 will do the same down to 2 to 8 nanoseconds. Almost any
oscilloscope designed after those will do the same although
sometimes that feature gets left out like with some recent Rigols.
Of course after recording a glitch, where it occurred is only
recorded to a resolution limited by the decimated sample rate, so if
your record length is 2500 points total and 250 points per division
and the sweep speed is 5 seconds per division for a recorded sample
rate of 50 points per second, then you would only know where the 10
nanosecond glitch occurred to within 40 milliseconds. It is 40
milliseconds and not 20 milliseconds because storing both a minimum
and maximum value for a given time period effectively halves the
record length.
It appears to me that just adding memory is more of a 'brute force' approach to get the data, and it's probably cheaper than developing a good peak detector. Then again, that's been around for decades anyway.
However, I remember the Tonmeisters at our institute have a decent DSO (MSO7054A) - I'll have a look at it to see what such a high-profile device provides. ;-)

If you want more time resolution in that situation, then you need a
longer record length and many oscilloscopes provide that.
yup, but regarding my examples that should pretty much do it.
Thanks a lot for that explanation,
m.



 

On Sat, Aug 17, 2013 at 12:11 PM, <egroups@...> wrote:
Do Tek scopes provide anything that would be helpful in that way?
You can use two scopes, one set to record slow data, one set to record
the glitch.

There's a split-screen storage mainframe which is coming to me
sometime in the near future, I wonder if that would be able to perform
a trick - can you use the 7D20 in such a way that the lower part shows
the recorded data with little storage whereas the top part has long
storage and records the live data in analog during sweep using the
second time base? I'll try it out once I have that working and a 7D20.

Cheers,
D.


tontaub
 

Hi Henrik,

That's correct. The most obvious way to do this is of course to trigger
the scope on the glitch - if you can set that up. A DSO is not like an
analog scope where the sweep starts when the trigger event happens. On a
digital storage scope the waveform data continuously "streams" through the
memory (as long as you haven't stopped the acquisition, of course), so
when the trigger "fires", what happened before the trigger event is
already in memory.
ah, thanks, that answers my question regarding a 'pre-record functionality'.

The memory then continues to fill until the trigger event is in the
"center" of the memory, so what you get on the screen is as much waveform
data before the trigger as after. I think that's the norm anyway but I
hope someone will correct me if I've got it wrong.
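The pre-trigger behaviour described above can be modelled with a ring buffer (a toy sketch, not real acquisition firmware; the 50/50 pre/post split is the assumption stated above):

```python
from collections import deque

# Toy pre-trigger model: samples stream through a fixed-size ring buffer,
# so when the trigger fires, the recent past is already in memory.

RECORD = 2500

def acquire(stream, trigger_level):
    buf = deque(maxlen=RECORD)   # oldest samples fall off the back
    post = None                  # samples still to take after the trigger
    for s in stream:
        buf.append(s)
        if post is None:
            if s >= trigger_level:
                # keep filling until the trigger sits roughly mid-record
                post = RECORD // 2
        else:
            post -= 1
            if post <= 0:
                break
    return list(buf)

# A step at sample 10000 of an otherwise flat stream:
stream = [0.0] * 10000 + [1.0] * 10000
rec = acquire(stream, trigger_level=0.5)
print(len(rec), rec[0], rec[-1])  # -> 2500 0.0 1.0 (pre-trigger history kept)
```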
 
However, if you must capture the whole sequence of several seconds AND
be able to see the glitch in detail, then you need more memory. By today's
standard the 2.5kpts you mentioned really is not much. My Rigol DS4000
for example has 140Mpts (yes, 140 million points) of waveform memory.
So, I can sample at say 10M samples per second (i.e. one sample every
100ns) and still record 14 seconds worth of waveform data - the 2.5kpts
would only last 250us at that sample rate.
Put this way it sounds nice to have that much memory.
Then again, is there any aid in the user interface of your scope to prevent you from endlessly scrolling through the data?
I'd guess it's about knowing what you're looking for and getting a proper setup in any case.

m.

 


PA4TIM
 

I have a 7D11. I have played with it sometimes and I still must figure out how to use it correctly (same for the 7D15 I have; I tried that with the manual but I do not understand it, or it has a problem). It is probably me, because I bought it from Jerry, who tested and calibrated all the plug-ins I bought from him, and the other ones are all indeed perfect. Maybe it has to do with some internal or external switch settings of the 7704 (and I'm digital retarded ;-) )
Fred




 

I was thinking of the TDS2000C series when I used examples of a 2500
point record length. The 2230 and 2232 only have record lengths of
1000 or 4000 points. That is short by today's standards, but with
delayed sweep and peak detection support a short record length is not
as limiting as it seems.

The reason those Tektronix oscilloscopes have such short record
lengths is that they use the embedded memory in an FPGA or similar
without external SRAM. It is less expensive, actually it is close to
free, to add delayed sweep support than it is to add external fast
SRAM. Even the TDS3000C series only has a 10k point record length.

I would rather have fast waveform acquisition rates than long record
lengths and the two can be somewhat at odds.



tontaub
 

Hi Don, thanks, I see there are several approaches.
All the comments help me to make up my mind and see the options I have to upgrade my instrumentation gear.

Fair enough Michael but you asked if there was something else available
that would find your glitch and I was just suggesting a later scope
could do that. Your analysis of the problem is worth while and I don't
want to doubt it. Hope you find a way to resolve your problem with the
equipment you have available.
At some point I will - and again, all the comments on my postings these past days help me to get a better picture of the situation and to eventually decide what to do. Alas, I'm on a tight budget so it's important to take my time.

Don Black.
PS As I said, I'm no expert on DSOs and hope I can learn something from
it too. I have enough faith in this forum that if I say something stupid
someone knowledgeable will correct it.
I'm a musician in the first place and have some technical background which helps me to deal with the hardware and software I need and use.
And yes, the expertise in this forum is wonderful!
Thank you all for that!
Michael.



tontaub
 

--- In TekScopes@..., "cheater00 ." <cheater00@...> wrote:


You can use two scopes, one set to record slow data, one set to record
the glitch.
_If_ I had two scopes which allowed me to do that ... ;-)
m.


tontaub
 

--- In TekScopes@..., David <davidwhess@...> wrote:

I was thinking of the TDS2000C series when I used examples of a 2500
point record length. The 2230 or 2232 only has a record length of
1000 or 4000 points. That is short by today's standards but with
delayed sweep and peak detection support a short record length is not
as limiting as it seems.
With this approach I don't find it that limiting at all.
And if the instrument supports me in identifying a faulty situation quicker, I appreciate that.
OTOH, endlessly scrolling through sampled data is not too exciting either. ;-)
Using proper instrumentation techniques is crucial anyway.

The reason those Tektronix oscilloscopes have such short record
lengths is that they use the embedded memory in an FPGA or similar
without external SRAM. It is less expensive, actually it is close to
free, to add delayed sweep support than it is to add external fast
SRAM. Even the TDS3000C series only has a 10k point record length.

I would rather have fast waveform acquisition rates than long record
lengths and the two can be somewhat at odds.
Makes sense. I'd prefer more detail over huge amounts of data
if it comes to the decision where to spend the money, for instance.


On Sat, 17 Aug 2013 12:10:41 -0000, you wrote:

Hi Don,
no, in particular I was referring to something like a TDS2014C:
http://www.tek.com/oscilloscope/tds2000-digital-storage-oscilloscope
And I agree, this small amount of data storage made me wonder, also because of this I started this thread.
But David made it clear with the examples he showed how this is intended to work.
To find a glitch like I tried to describe in a straight forward fashion by sampling the entire signal it would take something like
ten times this amount of storage.

[2ms error seen up to the 10th harmonic (i.e. to quantify detail in the time domain) gives 1/0.002 * 10 * 2 = 10kHz Sampling rate. So for a 2s signal I'd need 10k*2 = 20000 samples at least]
I see there are products w/ 20kpts and 100kpts/ch at the same price range of a TDS2014C from a direct competitor of Tek - but I wonder what other implications that products have.

Michael.

--- In TekScopes@..., Don Black <donald_black@> wrote:

I assume you mean a digital storage scope, not a sampling scope that
needs a repetitive signal. 2500 samples are very few by today's
technology; I guess you're talking about an era when memory was scarce and
expensive. With a modern digital scope you can store a long stream of
data and should find the glitch.


 

On Sat, 17 Aug 2013 12:53:09 -0000, you wrote:

Hi David,

--- In TekScopes@..., David <davidwhess@...> wrote:

There are two reasons that recording and displaying a glitch shorter
than the sample interval is not a problem:

1. There is generally no antialiasing filter. Rise time is a
critical parameter for a time domain device like an oscilloscope
and having it change with sample rate would cause more problems
than it would solve.
An antialiasing filter would also interfere with and defeat the
purpose of . . .
Aaah, I really wondered _whether_ there would be something like an AA filter - I'd have preferred to see the raw samples
and draw my own conclusions from the data.
Modern DSOs often implement some type of antialiasing filter *after*
digitization but every implementation I have seen so far including
that by Tektronix sacrifices transient response although this does
result in a better rise time specification. This is actually to be
expected:

http://en.wikipedia.org/wiki/Gibbs_phenomenon
http://en.wikipedia.org/wiki/Ringing_artifacts#Solutions
http://cp.literature.agilent.com/litweb/pdf/5988-8008EN.pdf

2. Any good DSO will have a glitch or peak detect mode where the
digitizer sample rate is maximum at every sweep speed.
Decimation is
then used to lower the sample rate to what will fit within the
record length but instead of selecting a single sample to store,
the minimum and maximum values from a set of samples are stored so
any glitch or noise will be recorded and shown. Since two values
are stored, the record length is halved.
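The min/max decimation David describes can be sketched in a few lines of Python (a toy model of the idea, not any scope's actual firmware):

```python
def peak_detect_decimate(samples, factor):
    """Decimate by `factor`, but keep the minimum and maximum of each
    chunk instead of a single sample, so narrow glitches survive.
    Storing a pair per chunk halves the effective record length."""
    pairs = []
    for i in range(0, len(samples) - factor + 1, factor):
        chunk = samples[i:i + factor]
        pairs.append((min(chunk), max(chunk)))
    return pairs

# A one-sample-wide glitch in otherwise flat data survives 100:1 decimation:
data = [0] * 1000
data[503] = 5                          # the glitch
pairs = peak_detect_decimate(data, 100)
print(pairs[5])                        # (0, 5): the glitch is retained
```

With plain decimation (keeping every 100th sample) the glitch at index 503 would simply be dropped.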
Now we are talking!
That sounds much more like instrumentation engineering! ;-)
Since peak detection takes time as well, is there something like pre-record functionality to start the hi-res mode prior to the actual point of detection? Otherwise the error event might get cut off.
Or is this just theoretical because the circuits are fast enough anyway
relative to the signal being scrutinised?
I am not quite sure what you are asking here. Peak detection does not
affect timing. It does have the effect of showing worst case noise.

To give a concrete example, a 2230 in peak detect mode will record
and show glitches as narrow as 100 nanoseconds no matter how slow
the sweep speed is. A 2232 will do the same down to 10 nanoseconds
and a 2430 will do the same down to 2 to 8 nanoseconds. Almost any
oscilloscope designed after those will do the same although
sometimes that feature gets left out like with some recent Rigols.
Of course after recording a glitch, where it occurred is only
recorded to a resolution limited by the decimated sample rate so if
your record length is 2500 points total and 250 points per division
and the sweep speed is 5 seconds per division for a recorded sample
rate of 50 points per second, then you would only know where the 10
nanosecond glitch occurred to within 40 milliseconds. It is 40
milliseconds and not 20 milliseconds because storing both a minimum
and maximum value for a given time period effectively halves the
record length.
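The position-resolution arithmetic in that example works out as follows (figures taken directly from the example above):

```python
# Assumed figures from the example: 2500-point record, 250 points per
# division, 5 seconds per division sweep speed.
points_per_div = 250
s_per_div = 5.0

decimated_rate = points_per_div / s_per_div   # 50 points per second
# Peak detect stores a min/max pair per time slot, halving the effective
# record length, so each slot spans two sample periods:
position_resolution_s = 2 / decimated_rate    # 0.04 s = 40 ms
print(position_resolution_s)
```

So a 10 ns glitch is faithfully captured, but its location in the record is only known to within 40 ms.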
It appears to me that just adding memory is more of a 'brute force approach' to getting the data, and it's probably cheaper than developing a good peak detector. Then again, that's been around for decades anyway.
However, I remember the Tonmeisters at our institute have a decent DSO (MSO7054A) - I'll have a look at it to see what such a high-end device provides. ;-)
In an FPGA or ASIC based design peak detection during decimation is
almost free. In a discrete logic design like the 2230 it has a
significant cost. The 2440 series actually implements peak detection
before sampling using CCD technology which is unusual.

One problem with the brute force approach of increasing the record
length is that at some point it will limit the acquisition rate either
because of limited processing resources or because of latency. If the
record is 10 milliseconds long, then the acquisition rate cannot be
faster than 100 records per second.
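That latency limit is simple arithmetic (the 250 kS/s sample rate here is an assumed figure chosen to give a 10 ms record; any rate/length pair with the same ratio behaves identically):

```python
# A record cannot be acquired faster than real time: if each record
# spans 10 ms of signal, at most 100 acquisitions fit in one second,
# regardless of available processing power.
record_length_points = 2500
sample_rate_hz = 250_000                 # assumed example rate

record_duration_s = record_length_points / sample_rate_hz  # 0.01 s
max_acquisitions_per_s = 1 / record_duration_s             # 100
print(max_acquisitions_per_s)
```

Longer records at the same sample rate push this ceiling down further, which is the trade-off against waveform update rate mentioned earlier in the thread.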


 

Look on Tek's web site for Primers on DSO's, as well as application notes.

Short record lengths are not a problem for most signals, but there are cases where a high sample rate is needed in order to resolve a fast signal. This higher rate consumes memory faster, in many cases making it impossible to measure the period between two narrow pulses if they are far apart. Fast sampling is needed to see the narrow pulses, while slow sampling is required to get a long time record. Pick your poison.
Most DSO's allow you to put the trigger point in the center, or at either end of the trace.
If the scope has complex triggering, you can trigger on the fault & see what preceded & what followed the event.
Some DSO's also have a Peak Detect mode, which can help in some cases; it functions much like a glitch latch in a logic analyzer.

 
HankC, Boston
WA1HOS