Hi all, I have recently stumbled upon the idea of using vidicons and CRTs for analog convolution. Convolving a continuous incoming signal (say, audio) with an FIR (finite impulse response) is one way of creating echoes, simulating filters, etc. By far the most common way of computing convolution nowadays is digitally, via the FFT (fast Fourier transform), a fast algorithm for the DFT (discrete Fourier transform).
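As a sanity check on the terminology, here's a quick numpy sketch (the signal and FIR are just random, illustrative data) showing that multiplying DFTs and inverse-transforming gives the same result as direct time-domain convolution:

```python
import numpy as np

# Illustrative data only: a random signal and a random 100-tap FIR.
rng = np.random.default_rng(0)
signal = rng.uniform(-1, 1, 256)
fir = rng.uniform(-1, 1, 100)

# Direct (time-domain) convolution.
direct = np.convolve(signal, fir)

# Fast convolution: zero-pad both to the full output length,
# multiply the DFTs, and inverse-transform.
n = len(signal) + len(fir) - 1
fast = np.fft.irfft(np.fft.rfft(signal, n) * np.fft.rfft(fir, n), n)

assert np.allclose(direct, fast)
```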
I was wondering if anyone used vidicons and can confirm if this scheme would work.
The first time I heard of vidicons being used together with CRTs was when I read up on old TV stations and how they converted between 50 and 60 Hz or between line counts. They essentially filmed a TV, but the device eventually became an integrated CRT-vidicon design in which both the CRT and the vidicon scanned horizontally. If, however, you make them scan at 90 degrees to each other, you can use the pair for computation.
One way of calculating convolution is as follows. Let's assume for a moment that both the signal and the FIR take values from -1 to 1. For every incoming sample of the signal, call it s_t, you start outputting s_t * FIR: you play back every sample of the FIR in sequence, scaled by s_t. So on this sample you output s_t * FIR_1, where FIR_1 is the first sample of the FIR. On the next sample, s_t+1, you start outputting s_t+1 * FIR while continuing to play back s_t * FIR; so on that sample you output s_t+1 * FIR_1 plus s_t * FIR_2.
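That overlap-and-add bookkeeping can be sketched in a few lines of numpy (random illustrative data; the loop mirrors "each incoming sample launches a scaled copy of the FIR"):

```python
import numpy as np

rng = np.random.default_rng(1)
signal = rng.uniform(-1, 1, 50)
fir = rng.uniform(-1, 1, 8)

# Each incoming sample s launches a copy of the FIR scaled by s;
# all in-flight copies are summed into the output stream.
out = np.zeros(len(signal) + len(fir) - 1)
for t, s in enumerate(signal):
    out[t:t + len(fir)] += s * fir

# This is exactly the convolution of the signal with the FIR.
assert np.allclose(out, np.convolve(signal, fir))
```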
Now take a CRT and break its screen into, say, a 100 x 100 grid of points, scanned horizontally from top left to bottom right. Start in the top left. Wait for a sample of the signal, say s_1, to come in. Then output horizontally, using the Z channel (brightness), the samples s_1*FIR_1, s_1*FIR_2, and so on.
Wait for the next sample, meanwhile moving to the 2nd line and starting at its second point. When sample s_2 comes in, output s_2*FIR_1, s_2*FIR_2, and so on until you hit the end of the line at s_2*FIR_99; then wrap around to the start of line 2 (without advancing to line 3) and output s_2*FIR_100.
So on the nth scan line, you start outputting at the nth "pixel". On the 100th scan line, you start at the 100th pixel. Then you wrap around to the first line and start on the 1st pixel.
Next, the vidicon. Point it at the CRT, but rotated 90 degrees, so that it scans vertically: top to bottom, left to right, starting at the top left and ending at the bottom right. It has scan columns rather than scan rows.
For every incoming sample, while the corresponding row is being written to the CRT, scan a single column. You can start before the row has finished, because all previous rows are complete, and from the newest row you only need the home pixel (the pixel written first; for the nth line, it is the nth pixel). While scanning, sum the brightness over the whole column, and that sum is your output: it is exactly a 100-point convolution between the FIR and the incoming signal. Cool, right?
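Here's a numpy simulation of the whole scheme (the grid, 0-based row indices, and home-pixel offsets are my own modelling assumptions): each sample writes one row with the diagonal offset and wraparound, the "vidicon" sums one column, and the column sums come out equal to an ordinary convolution:

```python
import numpy as np

N = 100                       # grid is N x N "pixels"
rng = np.random.default_rng(2)
fir = rng.uniform(-1, 1, N)   # one FIR tap per column
signal = rng.uniform(-1, 1, 300)

grid = np.zeros((N, N))       # stands in for the phosphor
out = []
for t, s in enumerate(signal):
    row = t % N
    # CRT: write s * FIR into this row, starting at the home pixel
    # (pixel t mod N) and wrapping around the end of the line.
    grid[row] = np.roll(s * fir, row)
    # Vidicon: scan one column and sum the brightness.
    out.append(grid[:, t % N].sum())

# Each column sum equals one point of the running convolution
# (unwritten rows are zero, matching convolve's zero padding).
ref = np.convolve(signal, fir)[: len(signal)]
assert np.allclose(out, ref)
```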
Here's an even cooler thing. Nothing says the FIR must have as many points as there are rows: you can make it a continuous signal and scan the 100-row image with more than 100 columns, say 400. That simply means the FIR has 4x the bandwidth of the signal it is being applied to.
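A sketch of the oversampled variant, under my own assumptions that the home pixel advances 4 columns per row and the vidicon reads 4 columns per incoming sample; the column sums then match convolving the 4x zero-stuffed signal with the dense FIR:

```python
import numpy as np

N, R = 100, 4                      # 100 rows, oversampling ratio 4
rng = np.random.default_rng(3)
fir = rng.uniform(-1, 1, R * N)    # "continuous" FIR: R taps per row step
signal = rng.uniform(-1, 1, 300)

grid = np.zeros((N, R * N))
out = []
for t, s in enumerate(signal):
    row = t % N
    # Home pixel moves R columns per row, wrapping around the line.
    grid[row] = np.roll(s * fir, R * row)
    # Vidicon scans R columns per incoming sample: 4x output rate.
    for j in range(R):
        out.append(grid[:, (R * t + j) % (R * N)].sum())

# Reference: convolve the zero-stuffed (x4) signal with the dense FIR.
up = np.zeros(R * len(signal))
up[::R] = signal
ref = np.convolve(up, fir)[: len(out)]
assert np.allclose(out, ref)
```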
And finally, you can of course output a different FIR for every incoming sample (for every incoming row). That would let you apply, via convolution, e.g. a parametric filter whose settings change over time.
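And a sketch of the time-varying case, assuming each row is written with whatever FIR is current at that moment; each output then mixes taps from the FIRs that were active when the corresponding inputs arrived:

```python
import numpy as np

N = 100
rng = np.random.default_rng(4)
signal = rng.uniform(-1, 1, 300)
# A different (hypothetical) FIR for every incoming sample.
firs = rng.uniform(-1, 1, (len(signal), N))

grid = np.zeros((N, N))
out = []
for t, s in enumerate(signal):
    # Each row is written with the FIR current at write time.
    grid[t % N] = np.roll(s * firs[t], t % N)
    out.append(grid[:, t % N].sum())

# Output t mixes tap k of the FIR that arrived with sample t - k.
ref = [sum(signal[t - k] * firs[t - k][k] for k in range(min(t + 1, N)))
       for t in range(len(signal))]
assert np.allclose(out, ref)
```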
Regarding practical implementation of the CRT output: for simplicity, I described outputting horizontal lines of variable brightness in a raster. To preserve the phosphor, however, when increasing the Z (brightness) of the trace you would also add a little high-frequency signal to make the line thicker. The vidicon and the averaging circuit downstream will like that just as well, probably even better, and it prevents hot spots. It also doesn't matter if the lines intersect; they might as well all be at the same vertical level. In that case, though, the phosphor may develop more hot spots, and the non-linearity of phosphor brightness vs. cathode current may come into play. The raster was just to illustrate the structure of the algorithm; if you need real accuracy, you might want to keep the lines from intersecting, but at that point, why are you even doing this with a vidicon?
|
|
Bonjour. Convolution, cross-spectrum, cepstrum, correlation, and Fourier-transform analysis were first used in military and defense signal processing, e.g. radar and sonar, during the Cold War.
A special CRT-plus-image-tube combo was used in the 1950s-1960s for memory and other functions.
Later on, lasers and holograms entered real-time image processing.
Check the early history of computing with CRTs on various history-of-technology websites.
Bonne chance,
Jon
|
|
Thanks Jean-Paul. How are lasers and holograms related to this? Thank you
On Sat, Jan 22, 2022 at 10:45 AM Jean-Paul <jonpaul@ix.netcom.com> wrote:
|
|
Optical correlation and convolution are analog methods that have been used for years. We did some of this years ago when examining the modulation transfer function (MTF) of x-ray imaging systems, using a combined analog/digital system: a scanning microdensitometer scanned the film image of a knife edge, then the derivative gave the line spread function and an FFT gave the MTF (an abbreviated description). We put a shaft encoder on the pen-driving mechanism of the microdensitometer (its output was a plot on paper) to digitize the signal. But I had colleagues who were doing optical deconvolution for noise reduction (provided they knew the noise properties, though there are publications on blind deconvolution; those use digital methods) and trying template matching in the Fourier domain for object detection.

Using a vidicon to "read" a CRT sounds vaguely familiar to me, such as a tube that combined both in a single vacuum envelope. RCA had a storage tube called the Radechon that was used in early computing systems. It could do things like time-base changes and noise reduction through integrating signals. There's a copy of the Radechon advertisement here: http://coldwar-c4i.net/SAGE/EE0858-BC.html Unfortunately, the links there don't work, but other articles and even the Radechon data sheet are online. They turn up for sale as well; there is one on eBay now (just search "Radechon"). I'm not the seller and have no conflict of interest with him or her. Oh, and I purchased a Radechon several years ago on eBay and added it to my collection of old computer stuff.

If any of you worked with room-sized computers: many kept a Tek scope on a cart tucked away for use by the service folks. There was one (don't remember which model, but I think it was a 465) on a cart in the computer room of the 7094 mod 2 I used in college.

Steve H.
On Sat, Jan 22, 2022 at 02:11 cheater cheater <cheater00social@gmail.com> wrote:
|
|
That pattern detection sounds like optical deconvolution using shapes encoded into lenses. Is that what you were doing? I think it was used for weapons targeting...
On Sat, Jan 22, 2022 at 4:04 PM stevenhorii <sonodocsch@gmail.com> wrote:
|
|
It's amazing to me: the only new thing is the history you don't know. This is why I add a lot of short history lessons on technology in my classes. I find younger people really don't know much in spite of the internet; they are very poor researchers.

The big thing in the late 'fifties and early 'sixties was side-looking synthetic-aperture radar, some versions called SLAR. We literally "took pictures" of the radar display of the SLAR and built images from the film after the plane had landed. An attempt was made to process the film onboard the aircraft to create near-real-time maps of enemy terrain (russkies). The Russians themselves took a different approach: they built an optical processor, interfaced with their radar display, that did the convolution and presentation of the terrain in real time aboard the aircraft. Very advanced thinking.

BTW: lasers and holograms go side by side; a coherent source is required for scene reconstruction. All the way back to 1963, IIRC. Also, an ex-boss of mine did holography in 1964 with MMW radar. An excellent little book is called "Radar, Sonar & Holography". My students get to read a version of my copy if they desire, but hardly any do...

Regards,
Jeff Kruth

In a message dated 1/22/2022 10:04:14 AM Eastern Standard Time, sonodocsch@gmail.com writes:
|
|
right, but I'm still lost, what do holograms have to do with convolution? Thanks On Sat, Jan 22, 2022 at 4:46 PM Jeff Kruth via groups.io <kmec=aol.com@groups.io> wrote: Its amazing to me. The only new thing is the history you dont know. This is why I add a lot of short history lessons on technology in my classes. I find younger people really dont know much in spite of the internet. They are very poor researchers. The big thing in the late 'fifties early sixties was side looking synthetic aperture radar, some versions called SLAR. We literally "took pictures" of the the radar display of the SLAR and built images from the film after the plane had landed. An attempt was made to process the film onboard the aircraft to create near real time maps of enemy terrain (russkies).The Russians took a different approach: they built an optical processor that interfaced with their radar display that did the convolution and presentation of the terrain in real time aboard the aircraft. Very advanced thinking. BTW: lasers and holograms go side-by side: A coherent source is required for scene construction. All the way back to 1963, IIRC. Also an ex-boss of mine did holography in 1964 with MMW radar. An excellent little book is called "Radar,Sonar & Holography". My student get to read a version of my copy if they desire, by hardly any do....Regards,Jeff Kruth In a message dated 1/22/2022 10:04:14 AM Eastern Standard Time, sonodocsch@gmail.com writes: Optical correlation and convolution are analog methods that have been used for years. We did some of this years ago when examining the modulation transfer function of x-ray imaging systems. We used a combination analog/digital system for this. A scanning microdensitometer for scanning the film image of the knife edge and then the derivative for the line spread function and FFT for the MTF (sort of an abbreviated description). 
We put a shaft encoder on the pen driving mechanism for the microdensitometer (its output was a plot on paper) to digitize the signal. But I had colleagues who were doing optical deconvolution for noise reduction (provided they knew what the noise properties were though there are some publications on blind deconvolution but those use digital methods) and trying template matching in the Fourier domain for object detection. Using a vidicon to “read” a CRT sounds vaguely familiar to me, such as a tube that did this combining both in a single vacuum envelope.
RCA had a storage tube called a Radechon that was used in early computing systems. It could do things like time base changes and noise reduction through integrating signals. There’s a copy of the Radechon advertisement here:
http://coldwar-c4i.net/SAGE/EE0858-BC.html
Unfortunately, the links don’t work but there are other articles and even the data sheet on the Radechon online. They turn up for sale as well - there is one on eBay now (just search “Radechon”). I’m not the seller and have no conflict of interest with him or her. Oh, and I purchased a Radechon several years ago on eBay. I added it to my collection of old computer stuff. If any of you worked with room-sized computers, many kept a Tek scope on a cart tucked away for use by the service folks. There was one (don’t remember which model, but I think it was a 465) on a cart in the computer room of the 7094 mod 2 I used in college.
Steve H.
On Sat, Jan 22, 2022 at 02:11 cheater cheater <cheater00social@gmail.com> wrote:
Hi all, I have recently stumbled upon the idea of using vidicons and CRTs for analog convolution. Convolution of a continuous incoming signal (say audio) with a FIR (finite impulse response) is one way of creating echo, simulating filters, etc. By far the only method of calculating convolution nowadays is the DFT (discrete Fourier transform), a form of the FFT (fast Fourier transform).
I was wondering if anyone used vidicons and can confirm if this scheme would work.
The first time I heard of vidicons being used together with CRTs was when I read up on old TV stations and how they converted from 50 to 60 Hz or between line amounts. They essentially filmed a TV, but the device eventually was an integrated CRT-Vidicon invention where the CRT and Vidicon both scanned horizontally. However, if you make them scan at 90 degrees of each other, you can use that for computation.
One way of calculating convolution is as follows. Let's assume for a second that the signal and the FIR take values from -1 to 1. For every sample of the signal you have coming in, call that sample s_t, you start outputting s_t * FIR - you output all samples of the FIR subsequently, scaled by s_t. So on this sample, you output s_t * FIR_1, where FIR_1 is the first sample of the FIR. On the next sample, say s_t+1, you start outputting s_t+1 * FIR, while also continuing to play back s_t * FIR. So on the second sample you output s_t+1 * FIR_1, but also s_t * FIR_2.
Let's say you take a CRT and break it up into, say, a grid of 100 x 100 points. Scan them horizontally from top left to bottom right. Start out in the top left. Wait for a sample of the signal, say s_1, to come in. Horizontally, you start outputting - using the Z channel (brightness) - the samples s_1*FIR_1, s_1*FIR_2, etc.
Wait for the next sample to come in, and meanwhile move to the 2nd line, starting on the second point. When sample s_2 comes in, start outputting s_2*FIR_1, s_2*FIR_2, and so on until you hit the end of the line, which is s_2*FIR_99. Then wrap around to the start of line 2 (without advancing to line 3) and output s_2*FIR_100.
So on the nth scan line, you start outputting at the nth "pixel". On the 100th scan line, you start at the 100th pixel. Then you wrap around to the first line and start on the 1st pixel.
Next, the vidicon. Point it at the CRT, but rotated 90 degrees about the viewing axis, so that it scans vertically: top to bottom, left to right, starting from the top left and ending at the bottom right. It has scan columns rather than scan rows.
For every sample you have coming in, as you are outputting the relevant row to the CRT, scan a single column. You can start before the row is done outputting, because all previous rows are done, and as for the latest row, you only care about the home pixel (the pixel that is first put on the screen, ie for the nth line it's the nth pixel). While scanning this, sum up the brightness over your whole scan, and that's your output. That just calculated a convolution between the FIR and the incoming signal. It's a 100-point convolution. Cool, right?
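The bookkeeping above (row t mod 100 written starting at its home pixel, one column read per sample) can be sanity-checked in software. Here is a pure-Python sketch with an 8x8 grid instead of 100x100 (all names and test values are mine, chosen only for illustration):

```python
# Simulate the CRT/vidicon scheme: sample t is written to row t % N,
# starting at column t % N (the "home pixel") and wrapping; the "vidicon"
# then sums column t % N, which picks FIR[j] out of the row holding s_{t-j}.
N = 8
fir = [0.9, 0.6, 0.3, 0.1, -0.1, -0.2, -0.1, 0.05]   # an 8-tap FIR
signal = [1.0, 0.0, -0.5, 0.25, 0.75, -1.0, 0.5, 0.0, 0.1, -0.3]

grid = [[0.0] * N for _ in range(N)]                  # the "phosphor"
out = []
for t, s in enumerate(signal):
    r = t % N
    for k in range(N):                                # CRT writes row r
        grid[r][(r + k) % N] = s * fir[k]
    out.append(sum(grid[row][r] for row in range(N)))  # vidicon reads column r

# reference: direct convolution against the zero-padded sample history
ref = [sum(signal[t - j] * fir[j] for j in range(N) if t - j >= 0)
       for t in range(len(signal))]
assert all(abs(a - b) < 1e-9 for a, b in zip(out, ref))
```

The assertion passes because, when column t mod N is read, each row holds a distinct delayed sample s_{t-j}, and the circular row offset lines FIR_j up with exactly that row.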
Here's an even cooler thing. Nothing says you have to give the FIR as many points as there are rows. You can make the FIR a continuous signal. Then you can scan that 100-row image using more than 100 columns - say, 400 columns. The output is then produced at four times the input rate, so the FIR can carry detail at up to 4x the bandwidth of the signal it's being applied to.
And finally, you can of course output a different FIR for every incoming sample (for every written row). This would let you, e.g., apply a parametric (time-varying) filter via convolution.
Regarding practical implementation of the CRT output. For simplicity, I described above a situation where you're outputting horizontal lines of variable brightness in a raster. However, to preserve the phosphor, when increasing the Z (brightness) of the trace, you would also add a little bit of a high-frequency signal to make the line thicker. The vidicon and the subsequent averaging circuit will like it just as well, and probably even better, and this way you can prevent hot spots. It also doesn't matter if the lines intersect - they might just as well all be at the same vertical position. However, then the phosphor might develop more hot spots, and the non-linearity of phosphor brightness vs. cathode current might come into play. The raster was just to illustrate the structure of the algorithm; if you need some measure of accuracy, you might want to keep lines from intersecting, but at that point, why are you even doing this with a vidicon?
The engineers who were looking at template matching in Fourier space were certainly aware of what the defense guys were doing (that was not classified), but they were interested in fairly “mundane” applications by comparison. They were looking at the technique for automated character recognition. This is still difficult, which is why sites use CAPTCHA - those “I’m not a robot” things. Character recognition against a noisy background is still a difficult challenge. There was some interest in looking at the Fourier transforms of lung disease as many of the non-focal lung diseases result in very fine (think high spatial frequency) detail changes in the lungs. An optical Fourier transform is effectively done at light speed and you can at least see the magnitude.

Spectral displays are still currently used in ultrasound. Doppler signals from moving blood carry information besides the usual velocity signal. Turbulence caused by narrowed vessels results in a wider Doppler spectrum. This can be displayed as the actual spectrum but more commonly as a “third dimension” in addition to time and velocity - the waveforms are given a grayscale value that represents the integrated spectrum. Brighter grayscale values represent a broader spectrum and so are indicators of turbulent flow. In 2D blood flow images, the velocities can be coded in color. Turbulence looks like colored confetti; laminar flow, not so much. Experienced sonographers and the physicians who interpret these studies can also usually hear turbulence by listening to the Doppler signal. I’d bet that submarine sonar operators have ways of listening and looking for turbulence.

Low-pass filtering to generate an “unsharp mask” was used to sharpen images (photographers did this) by subtracting the unsharp image from the one you wanted to sharpen. This could be done in an analog fashion in an x-ray darkroom, but was cumbersome. Digital image processing completely replaced that.
There was an analog subtraction system used for many years. In angiography, the goal is often to see the blood vessels and not the stuff around them. So an image was taken of the anatomical area of interest before contrast injection, then a series of contrast images were taken (the equipment for that was quite something to see - and hear - in operation; imagine zipping five 14 x 14-inch films per second through a film handling device). In the darkroom, the darkroom technician would first make a negative of the non-contrast image. It was then sandwiched with the contrast image, the result was sandwiched with a piece of unexposed film, and the whole stack was exposed in a special light box. The result was “subtraction” of the tissues that did not contain contrast, leaving the vessels standing out. All analog. I think that early systems did use a storage tube for this in the angiographic room - the fluoroscopic video signal was inverted and stored for the subtraction mask, then the contrast images and subtraction mask were combined for the display. I am not sure of the electronics for this but basically, the two images were added. Again, digital techniques rapidly replaced this once they could process the images at a reasonable speed.

Part of my interest in radiology and why I chose it as a career was because of the intersection of engineering and physics with healthcare. The history of this is fascinating. One of the first attempts at CT-scan-like ultrasound was a machine built out of a B-17 ball turret to move the transducer around the patient (well, volunteer; this was never used clinically). Oh, and we had Tek scopes and monitors all over the place. Mostly Tek displays in ultrasound (they were used to display the Doppler waveforms and spectra) and they were favored for many of the diagnostic displays for fluoroscopy. Tek had really excellent display-to-display consistency.
The health physics folks and the equipment maintenance people used almost exclusively Tek scopes for troubleshooting and maintenance. Those were the days, but digital technology has revolutionized medical imaging as it has for so much of what we do now.

Steve H.

On Sat, Jan 22, 2022 at 10:04 stevenhorii via groups.io <sonodocsch=gmail.com@groups.io> wrote:

Optical correlation and convolution are analog methods that have been used for years. We did some of this years ago when examining the modulation transfer function of x-ray imaging systems. We used a combination analog/digital system for this: a scanning microdensitometer for scanning the film image of the knife edge, then the derivative for the line spread function and an FFT for the MTF (sort of an abbreviated description). We put a shaft encoder on the pen driving mechanism of the microdensitometer (its output was a plot on paper) to digitize the signal. But I had colleagues who were doing optical deconvolution for noise reduction (provided they knew what the noise properties were, though there are some publications on blind deconvolution - those use digital methods) and trying template matching in the Fourier domain for object detection. Using a vidicon to “read” a CRT sounds vaguely familiar to me - perhaps there was a tube that combined both in a single vacuum envelope.
RCA had a storage tube called a Radechon that was used in early computing systems. It could do things like time base changes and noise reduction through integrating signals. There’s a copy of the Radechon advertisement here:
http://coldwar-c4i.net/SAGE/EE0858-BC.html
Interleaved:

On 1/22/2022 2:11 AM, cheater cheater wrote:

> I was wondering if anyone used vidicons and can confirm if this scheme would work.

I used to build vidicon cameras at the hobby level.

> However, if you make them scan at 90 degrees of each other, you can use that for computation.

You might also want to look up a Radechon tube, which is much the same. The same kind of idea was used for PPI radar scan to TV conversion, early on.

There's an easier way, though.

1) Scrap the vidicon idea and use a CCD camera. Already digital, lots smaller, and so on.

2) Use a TFT display instead of a CRT, unless you use scope electronics.

3) OR use a dual-port memory, writing values into one port and reading values from the other. Make it as deep as you want, with as much resolution as you want. Random access for either side; for the "storage" side, use a read-modify-write scheme. Many FPGAs already have dual-port memories of various sizes. You can scan-convert an XY display by digitizing and then storing the appropriate pixel; persistence is simulated by hardware subtracting or clearing memory. Readout can drive a TFT RGB display directly.

Harvey
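Harvey's option 3 is, in effect, a standard digital FIR filter. A minimal sketch of that fully digital equivalent, with a delay line taking the place of the phosphor (the names and example values are my own):

```python
from collections import deque

def fir_stream(samples, taps):
    # Delay line holds the last len(taps) input samples, newest first;
    # each output is the dot product of the delay line with the taps.
    # This computes the same running convolution as the CRT/vidicon grid.
    delay = deque([0.0] * len(taps), maxlen=len(taps))
    for s in samples:
        delay.appendleft(s)
        yield sum(d * t for d, t in zip(delay, taps))

print(list(fir_stream([1.0, 0.0, 0.0, 0.5], [0.5, 0.25])))
# prints [0.5, 0.25, 0.0, 0.25]
```

The `deque` with `maxlen` silently drops the oldest sample on each `appendleft`, which is the digital analogue of an old row being overwritten on the tube.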
Back in post #190166 cheater cheater said 'right, but I'm still lost, what do holograms have to do with convolution? Thanks'
In mathematical terms, a convolution of two functions in one dimension is identical to multiplying the Fourier transforms of the two functions in the 'conjugate' dimension (eg frequency, if you started off with a signal varying in time) and then reverse transforming the product back into the real space (eg time again).
A hologram is closely related to a two-dimensional Fourier transform. If you put a mask in the plane of the hologram and reconstruct the image, you have multiplied the transform by 0 or 1 depending on the shape of the mask, and the reconstructed image is the same as you would have got by convolving with some strange function whose Fourier transform replicates the mask. Unfortunately it is not practical to make the necessary mask except in a few idealised applications. Some people even used masks containing half-wave plates of a precise shape, so that you had the choice of multiplying the hologram by +1, 0 or -1.
I hope I have got the terminology about right, this is from a long time ago!
Roger
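Roger's point - convolution in one domain equals multiplication in the other - can be demonstrated numerically with a plain DFT. A stdlib-only Python sketch (the sequences and tolerance are arbitrary choices of mine):

```python
import cmath

# Multiplying DFTs and transforming back gives circular convolution,
# which is the discrete form of the convolution theorem.
def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

a = [1.0, 2.0, 0.0, -1.0, 0.5, 0.0, 0.0, 0.0]
b = [0.5, 0.25, 0.125, 0.0, 0.0, 0.0, 0.0, 0.0]

via_fourier = idft([A * B for A, B in zip(dft(a), dft(b))])
direct = [sum(a[m] * b[(n - m) % len(a)] for m in range(len(a)))
          for n in range(len(a))]
assert all(abs(x - y) < 1e-9 for x, y in zip(via_fourier, direct))
```

With enough zero padding, the same identity yields ordinary (linear) convolution, which is how FFT-based convolution is done in practice.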
It's kinda like a 2D autocorrelation function where you can get very fast results with little or no math. Now that we have "fast" CPUs with "fast" hardware, we need it less. Not sure that they figured out a good way to do it, though. Holographic memories and searches therein were supposed to work similarly, I think.
Harvey
On 1/22/2022 12:40 PM, David Templeton wrote:

30+ years ago at uni we had a lecturer who raved about optical processing - not an optical CPU, though that was mentioned too, but using analog lenses to provide instantaneous complex mathematical functions.
David
In the 60’s and early 70’s there were scan converter units that consisted of a single tube looking like two CRTs assembled face-to-face. One half of the tube was actually a CRT and the other half had the makings of an image tube used in a television camera. The input side would scan to a target centered between the two halves and the other side would scan the target from the backside at its own scan rate to extract what was written. These units got heavy use in the television industry to convert from one video standard to another. In the computer industry they were used to convert the random output of computer data to a standard television format.

A few examples:
http://www.r-type.org/exhib/acl0001.htm
http://lampes-et-tubes.info/sc/sc040.php?l=e

And Tektronix even made them: https://w140.com/tekwiki/wiki/T7950

If I recall correctly, the scan converters we used in our data displays were made by Princeton Research.

As for image systems in vintage aircraft, during the 70’s I was exposed to aircraft that actually had film maps that were stored in cartridges and used in the mapping displays in the cockpits. The entire display assembly was able to move the film map through any orientation including rotation. As the aircraft moved, the gyrocompass system would control the film image to coincide with the aircraft location, thereby giving the pilot a real time position display. Different film cartridges were inserted on the flightline before a sortie to provide detailed mapping of the area of operation.

Greg
Greg,

I actually have a couple of these early moving map display things. At one time, I actually had a display that took the film cassettes. You noted that it would move the map to correspond to the aircraft's motion. The mechanism to do that was quite something. It had all manner of servo/synchro systems to scroll the map bidirectionally and then other mechanical movements to shift it. It was a masterpiece of mechanical engineering.

The units I have now are semi-digital. They still take a film cassette (these are older - the newer ones took a DVD and the latest I presume use some sort of solid-state storage) and one I believe likely has optics and an image sensor (probably a CCD) though I had one that had a flying spot scanning system in it. It used a tiny CRT to generate the flying spot. These units I have are just to generate a map image; I presume it used a digital system to provide the rotated and translated image since the cockpit displays are now flat panel ones. The one that had the film and optics in it along with the display even had a system that could switch the projection bulb in case the one in use burned out.

The highest resolution CRT I ever saw (and had) was from a film recording system made by Celco. It was a magnetic deflection tube. The tube was made by Litton but the precision deflection coils were made by Celco. It had some incredibly small spot size (measured in microns) but was not a full-frame display. It basically scanned a line at a time and would build up an image on the film. There was a color filter system to generate the colored images. I had the display and the electronics box, but no cables. I bought it at a GSA sale in a large lot. I also got a Tek 556 in that lot (and a Mars Mariner rocket engine, but that's a long story). I called Celco and they told me these systems were basically custom made. The cables were not stock but if I wanted a set, they would make them up for $5K.
The system sat in storage until I got a call from the R/Greenberg folks who do a lot of titles for movies. They were looking for spare parts for their system and they bought it from me.

I had an interesting system made for NASA. It had a pair of 16mm film projectors that projected images onto two small screens. The screens were surrounded by buttons. It was called a "multiparameter display". The films that were in it had various images, but some were images of meter faces. The system also had a pair of galvanometers that could project pointers onto the screens. This way, the screen became an analog meter and the scales could be changed depending on what meter face was displayed. Other displays were pages from manuals or menus with choices that lined up with the buttons so you could use them to select functions. It was some sort of prototype - I don't think NASA ever flew it, but it was a neat idea. It had a pair of large multipin circular connectors on the back, presumably to connect to the computer system or other processor that would run it.
|
|
What sort of engine are we talking about here? Wow.
On Sat, Jan 22, 2022 at 9:24 PM stevenhorii <sonodocsch@gmail.com> wrote:

Greg,
I actually have a couple of these early moving map display things. At one time, I actually had a display that took the film cassettes. You noted that it would move the map to correspond to the aircraft's motion. The mechanism to do that was quite something. It had all manner of servo/synchro systems to scroll the map bidirectionally and then other mechanical movements to shift it. It was a masterpiece of mechanical engineering. The units I have now are semi-digital. They still take a film cassette (these are older - the newer ones took a DVD and the latest I presume use some sort of solid-state storage) and one I believe likely has optics and an image sensor (probably a CCD) though I had one that had a flying spot scanning system in it. It used a tiny CRT to generate the flying spot. These units I have are just to generate a map image; I presume it used a digital system to provide the rotated and translated image since the cockpit displays are now flat panel ones. The one that had the film and optics in it along with the display even had a system that could switch the projection bulb in case the one in use burned out.
The highest resolution CRT I ever saw (and had) was from a film recording system made by Celco. It was a magnetic deflection tube. The tube was made by Litton but the precision deflection coils were made by Celco. It had some incredibly small spot size (measured in microns) but was not a full-frame display. It basically scanned a line at a time and would build up an image on the film. There was a color filter system to generate the colored images. I had the display and the electronics box, but no cables. I bought it at a GSA sale in a large lot. I also got a Tek 556 in that lot (and a Mars Mariner rocket engine, but that's a long story). I called Celco and they told me these systems were basically custom made. The cables were not stock but if I wanted a set, they would make them up for $5K. The system sat in storage until I got a call from the R/Greenberg folks who do a lot of titles for movies. They were looking for spare parts for their system and they bought it from me.
I had an interesting system made for NASA. It had a pair of 16mm film projectors that projected images onto two small screens. The screens were surrounded by buttons. It was called a "multiparameter display". The films that were in it had various images, but some were images of meter faces. The system also had a pair of galvanometers that could project pointers onto the screens. This way, the screen became an analog meter and the scales could be changed depending on what meter face was displayed. Other displays were pages from manuals or menus with choices that lined up with the buttons so you could use them to select functions. It was some sort of prototype - I don't think NASA ever flew it, but it was a neat idea. It had a pair of large multipin circular connectors on the back, presumably to connect to the computer system or other processor that would run it.
On Sat, Jan 22, 2022 at 1:42 PM Greg Muir via groups.io <big_sky_explorer= yahoo.com@groups.io> wrote:
In the 60’s and early 70’s there were scan converter units that consisted of a single tube looking like two CRTs assembled face-to-face. One half of the tube was actually a CRT and the other half had the makings of an image tube used in a television camera. The input side would scan to a target centered between the two halves and the other side would scan the target from the backside at its own scan rate to extract what was written. These units got heavy use in the television industry to convert from one video standard to another. In the computer industry they were used to convert the random output of computer data to a standard television format.
A few examples: http://www.r-type.org/exhib/acl0001.htm http://lampes-et-tubes.info/sc/sc040.php?l=e
And Tektronix even made them: https://w140.com/tekwiki/wiki/T7950
If I recall correctly, the scan converters we used in our data displays were made by Princeton Research.
As for image systems in vintage aircraft, during the 70’s I was exposed to aircraft that actually had film maps that were stored in cartridges and used in the mapping displays in the cockpits. The entire display assembly was able to move the film map through any orientation including rotation. As the aircraft moved, the gyrocompass system would control the film image to coincide with the aircraft location thereby giving the pilot a real time position display. Different film cartridges were inserted on the flightline before a sortie to provide detailed mapping of the area of operation.
Greg
|
|
If you are interested in creation of images and effects on CRTs, go to the site "scanimate.com". The Scanimate is an analog video effects generator capable of some amazing work. A person in Asheville, NC has collected what is believed to be the last two of them in existence and restored them to operation.
Bruce Gentry, KA2IVY
On 1/23/22 4:16, cheater cheater wrote: What sort of engine are we talking about here? Wow.
More interested in computation than sfx :)

On Sun, Jan 23, 2022 at 4:35 PM greenboxmaven via groups.io <ka2ivy=verizon.net@groups.io> wrote:
Fascinating stuff, Steve!
Back in the late 1980's I did my senior project in EE at Case Western Reserve U for some doctors at the Cleveland Clinic. We took analog EEG signals from a strip chart recorder (IIRC) and displayed them on a scope. We had counters to display the first line forward, the third in reverse, the fifth forward, and so on, until we got to the fifteenth line; then we started the even-numbered lines (16, 14, 12, etc.) back up the screen, to minimize any retrace time.
Jim Ford, now in Southern California
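The counter scheme Jim describes amounts to a boustrophedon line ordering: odd lines going down the screen, even lines coming back up, with the trace direction alternating so the beam never makes a long retrace jump. A small sketch of the ordering as I understand it (the function name and the forward/reverse bookkeeping are my reconstruction, not from the original project):

```python
def eeg_scan_order(n=16):
    """Display order for n lines: odds top-down (1, 3, ..., 15), then
    evens bottom-up (16, 14, ..., 2), alternating trace direction."""
    lines = list(range(1, n + 1, 2)) + list(range(n, 0, -2))
    return [(line, 'forward' if i % 2 == 0 else 'reverse')
            for i, line in enumerate(lines)]
```

Each consecutive pair of entries is spatially adjacent on screen, which is what kills the retrace time.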
------ Original Message ------
From: "stevenhorii" <sonodocsch@gmail.com>
To: TekScopes@groups.io
Sent: 1/22/2022 10:23:36 AM
Subject: Re: [TekScopes] [OT] CRT + Vidicon = analog convolution?

The engineers who were looking at template matching in Fourier space were certainly aware of what the defense guys were doing (that was not classified), but they were interested in fairly “mundane” applications by comparison. They were looking at the technique for automated character recognition. This is still difficult, which is why sites use CAPTCHA - those “I’m not a robot” things. Character recognition against a noisy background is still a difficult challenge. There was some interest in looking at the Fourier transforms of lung disease, as many of the non-focal lung diseases result in very fine (think high spatial frequency) detail changes in the lungs. An optical Fourier transform is effectively done at light speed and you can at least see the magnitude. Spectral displays are still used in ultrasound. Doppler signals from moving blood carry information besides the usual velocity signal. Turbulence caused by narrowed vessels results in a wider Doppler spectrum. This can be displayed as the actual spectrum, but more commonly as a “third dimension” in addition to time and velocity - the waveforms are given a grayscale value that represents the integrated spectrum. Brighter grayscale values represent a broader spectrum and so are indicators of turbulent flow. In 2D blood flow images, the velocities can be coded in color. Turbulence looks like colored confetti; laminar flow, not so much. Experienced sonographers and the physicians who interpret these studies can also usually hear turbulence by listening to the Doppler signal.
I’d bet that submarine sonar operators have ways of listening and looking for turbulence.
Low-pass filtering to generate an “unsharp mask” was used to sharpen images (photographers did this) by subtracting the unsharp image from the one you wanted to sharpen. This could be done in an analog fashion in an x-ray darkroom, but was cumbersome. Digital image processing completely replaced that. There was an analog subtraction system used for many years. In angiography, the goal is often to see the blood vessels and not the stuff around them. So an image was taken of the anatomical area of interest before contrast injection, then a series of contrast images were taken (the equipment for that was quite something to see - and hear - in operation; imagine zipping five 14 x 14-inch films per second through a film handling device). In the darkroom, the darkroom technician would first make a negative of the non-contrast image. It was then sandwiched with the contrast image, and the result was sandwiched with a piece of unexposed film, and the whole stack was exposed in a special light box. The result was “subtraction” of the tissues that did not contain contrast, leaving the vessels standing out. All analog. I think that early systems did use a storage tube for this in the angiographic room - the fluoroscopic video signal was inverted and stored for the subtraction mask, then the contrast images and subtraction mask were combined for the display. I am not sure of the electronics for this, but basically the two images were added. Again, digital techniques rapidly replaced this once they could process the images at a reasonable speed.
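The unsharp-mask trick is easy to state as arithmetic: sharpened = original + amount * (original - low-passed original). A minimal digital sketch of the darkroom technique described above (the box blur and the parameter names are my choices, purely illustrative):

```python
import numpy as np

def unsharp_mask(image, blur_radius=2, amount=1.0):
    """Sharpen by subtracting a low-pass ('unsharp') copy of the image,
    as in the darkroom technique. A simple box blur stands in for the
    defocused duplicate; no SciPy dependency."""
    k = 2 * blur_radius + 1
    padded = np.pad(image, blur_radius, mode='edge')
    blurred = np.zeros(image.shape, dtype=float)
    # sum the k x k neighborhood of every pixel via shifted slices
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    blurred /= k * k
    return image + amount * (image - blurred)
```

Flat regions pass through unchanged; edges get over/undershoot, which is exactly the perceived sharpening.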
Part of my interest in radiology and why I chose it as a career was because of the intersection of engineering and physics with healthcare. The history of this is fascinating. One of the first attempts at CT-scan like ultrasound was a machine built out of a B-17 ball turret to move the transducer around the patient (well, volunteer; this was never used clinically). Oh, and we had Tek scopes and monitors all over the place. Mostly Tek displays in ultrasound (they were used to display the Doppler waveforms and spectra) and they were favored for many of the diagnostic displays for fluoroscopy. Tek had really excellent display-to-display consistency. The health physics folks and the equipment maintenance people used almost exclusively Tek scopes for troubleshooting and maintenance.
Those were the days, but digital technology has revolutionized medical imaging as it has for so much of what we do now.
Steve H.
On Sat, Jan 22, 2022 at 10:04 stevenhorii via groups.io <sonodocsch= gmail.com@groups.io> wrote:
Optical correlation and convolution are analog methods that have been used for years. We did some of this years ago when examining the modulation transfer function of x-ray imaging systems. We used a combination analog/digital system for this: a scanning microdensitometer to scan the film image of the knife edge, then the derivative for the line spread function, and an FFT for the MTF (sort of an abbreviated description). We put a shaft encoder on the pen driving mechanism for the microdensitometer (its output was a plot on paper) to digitize the signal. But I had colleagues who were doing optical deconvolution for noise reduction (provided they knew what the noise properties were, though there are some publications on blind deconvolution - but those use digital methods) and trying template matching in the Fourier domain for object detection. Using a vidicon to “read” a CRT sounds vaguely familiar to me - I seem to recall a tube that did this, combining both in a single vacuum envelope.
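For anyone curious, the knife-edge chain Steve abbreviates (edge profile -> derivative = line spread function -> FFT -> MTF) can be sketched digitally in a few lines. This assumes uniformly sampled data; the function and parameter names are mine:

```python
import numpy as np

def mtf_from_edge(esf, dx=1.0):
    """Edge spread function -> line spread function -> MTF.
    'esf' is the scanned knife-edge profile; 'dx' is the sample spacing."""
    lsf = np.gradient(esf, dx)            # derivative of the edge profile
    mtf = np.abs(np.fft.rfft(lsf))        # magnitude spectrum of the LSF
    mtf /= mtf[0]                         # normalize to 1 at zero frequency
    freqs = np.fft.rfftfreq(len(lsf), d=dx)
    return freqs, mtf
```

A sharper edge gives a flatter MTF; a blurred edge rolls off quickly with spatial frequency.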
RCA had a storage tube called a Radechon that was used in early computing systems. It could do things like time base changes and noise reduction through integrating signals. There’s a copy of the Radechon advertisement here:
http://coldwar-c4i.net/SAGE/EE0858-BC.html
Unfortunately, the links don’t work but there are other articles and even the data sheet on the Radechon online. They turn up for sale as well - there is one on eBay now (just search “Radechon”). I’m not the seller and have no conflict of interest with him or her. Oh, and I purchased a Radechon several years ago on eBay. I added it to my collection of old computer stuff. If any of you worked with room-sized computers, many kept a Tek scope on a cart tucked away for use by the service folks. There was one (don’t remember which model, but I think it was a 465) on a cart in the computer room of the 7094 mod 2 I used in college.
Steve H.
On Sat, Jan 22, 2022 at 02:11 cheater cheater <cheater00social@gmail.com> wrote:
So on the nth scan line, you start outputting at the nth "pixel". On the 100th scan line, you start at the 100th pixel. Then you wrap around to the first line and start on the 1st pixel.

Next for the Vidicon. Orient it towards the CRT, but at an angle of 90 degrees. So it should be scanning vertically, top to bottom, left to right, starting from the top left, and ending on the bottom right. It has scan columns rather than scan rows.

For every sample you have coming in, as you are outputting the relevant row to the CRT, scan a single column. You can start before the row is done outputting, because all previous rows are done, and as for the latest row, you only care about the home pixel (the pixel that is first put on the screen, i.e. for the nth line it's the nth pixel). While scanning this, sum up the brightness over your whole scan, and that's your output. That just calculated a convolution between the FIR and the incoming signal. It's a 100-point convolution. Cool, right?

Here's an even cooler thing. Nothing says you have to make the FIR have as many points as there are rows. You can make the FIR a continuous signal. Then, you can scan that 100-row image using more than 100 columns. Say, scan it at 400 columns. This then just means that the FIR has a bandwidth 4x higher than the signal it's being applied to.

And finally, of course you can output a different FIR for every incoming sample (for every incoming row). This would let you e.g. apply, via convolution, a parametric filter.

Regarding practical implementation of the CRT output. For simplicity, I described above a situation where you're outputting horizontal lines of variable brightness in a raster. However, to preserve the phosphor, when increasing the Z (brightness) of the trace, you would also add a little bit of a high-frequency signal, to make the line thicker. The vidicon and the subsequent averaging circuit will like it just as well, and probably even better. This way you can prevent hot spots. Also it doesn't matter if the lines intersect - they might just as well all be at the same level. However, then the phosphor might get more hot spots, and the non-linearity of phosphor brightness vs cathode current might come into play. The raster was just to illustrate the structure of the algorithm, but if you need some form of accuracy, then you might want to keep lines from intersecting - but at that point, why are you even doing this using a Vidicon?
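For what it's worth, the rotating-raster scheme quoted above can be sanity-checked with a quick digital simulation. This is only a model of the geometry (ideal phosphor, instantaneous row writes, perfect column integration), with names of my choosing; it confirms that the column sums equal an ordinary convolution:

```python
import numpy as np

def crt_vidicon_convolve(signal, fir):
    """Digital model of the CRT/vidicon convolver.

    Screen: N x N grid, N = len(fir). Each incoming sample s_t overwrites
    row (t mod N) with s_t * FIR, circularly shifted so the 'home pixel'
    FIR_1 lands at column (t mod N). The vidicon then reads column
    (t mod N); the integrated brightness is the output for that sample.
    """
    n = len(fir)
    screen = np.zeros((n, n))
    out = []
    for t, s in enumerate(signal):
        # CRT side: write one horizontal row, rotated by the row index
        screen[t % n] = np.roll(s * np.asarray(fir), t % n)
        # Vidicon side: scan one vertical column and sum the brightness
        out.append(screen[:, t % n].sum())
    return np.array(out)
```

Because each row holds one past sample's scaled FIR, shifted by its age, the column at t picks up s_{t-k} * FIR_{k+1} for k = 0..N-1, which is exactly the FIR convolution the post describes.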
I don’t know if you have seen the output of a modern ECG machine, but they are digital and have on-board computing power. They are actually not much larger (in some cases smaller) than the older analog ones. That computing power means they actually diagnose (or at least characterize) the waveforms and print it on the plots. They also format the plots - typically twelve leads: the four on your limbs plus six on your chest, though some of the “leads” are calculated, not physical - putting all the signals on a single page, followed by a “rhythm strip”, a continuous recording over several seconds, on a second page. The cardiologist usually just reviews the ECG and the characterization reported by the machine, and signs off on it if he or she agrees.
I remember using a caliper to “digitize” an ECG and running an FFT on it. I didn’t know that this had been tried many times before in an attempt to use the ECG spectrum for diagnosis. The most recent work on this has been using wavelet decomposition of ECGs. Apparently it works for de-noising ECG waveforms.
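Wavelet de-noising in its very simplest form (one level of the Haar wavelet with hard thresholding) fits in a few lines. Real ECG work uses deeper decompositions and smoother wavelets; everything below is an illustrative sketch, not any particular published method:

```python
import numpy as np

def haar_denoise(x, threshold):
    """One-level Haar wavelet de-noising. x must have even length.
    Detail coefficients below 'threshold' are treated as noise and zeroed."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (local averages)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (local differences)
    d = np.where(np.abs(d) > threshold, d, 0.0)  # hard-threshold the noise
    # inverse transform
    y = np.empty_like(x, dtype=float)
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y
```

With threshold 0 the transform is perfectly invertible; with a nonzero threshold, small high-frequency wiggles vanish while large features survive.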
Do you recall what became of your project at Cleveland Clinic?
Steve H.
On Sun, Jan 23, 2022 at 20:19 Jim Ford <james.ford@cox.net> wrote:
Nope, I don't know what happened with our readout system at the Cleveland Clinic. We didn't get it working completely anyway. Wavelets is one of those areas I wish I could get my head around. So much cool technology, so little time to explore it....

Jim

Sent from my T-Mobile 4G LTE Device
-------- Original message --------
From: stevenhorii <sonodocsch@gmail.com>
Date: 1/23/22 5:51 PM (GMT-08:00)
To: TekScopes@groups.io
Subject: Re: [TekScopes] [OT] CRT + Vidicon = analog convolution?

I don't know if you have seen the output of a modern ECG machine, but they are digital and have on-board computing power. They are actually not much larger (in some cases smaller) than the older analog ones. That computing power means they actually diagnose (or at least characterize) the waveforms and print it on the plots. They also format the plots (typically from twelve leads - the four on your limbs plus six on your chest - but some of the "leads" are calculated, not physical) - all the signals on a single page - and then a "rhythm strip", which is a continuous recording over several seconds, on a second page. The cardiologist usually just reviews the ECG and the characterization reported by the machine, and signs off on it if he or she agrees.

I remember using a caliper to "digitize" an ECG and ran an FFT on it. I didn't know that this had been tried many times before in an attempt to use the ECG spectrum for diagnosis. The most recent work on this has been using wavelet decomposition of ECGs. Apparently it works for de-noising ECG waveforms.

Do you recall what became of your project at Cleveland Clinic?

Steve H.

On Sun, Jan 23, 2022 at 20:19 Jim Ford <james.ford@cox.net> wrote:

Fascinating stuff, Steve!

Back in the late 1980's I did my senior project in EE at Case Western Reserve U for some doctors at the Cleveland Clinic. We took analog EEG signals from a strip chart recorder (IIRC) and displayed them on a scope. We had counters to display the first line forward, the third in reverse, the fifth forward, etc., until we got to the fifteenth line and then started the even-numbered lines 16, 14, 12, etc. back up the screen, to minimize any retrace time.

Jim Ford, now in Southern California

------ Original Message ------
From: "stevenhorii" <sonodocsch@gmail.com>
To: TekScopes@groups.io
Sent: 1/22/2022 10:23:36 AM
Subject: Re: [TekScopes] [OT] CRT + Vidicon = analog convolution?

The engineers who were looking at template matching in Fourier space were certainly aware of what the defense guys were doing (that was not classified), but they were interested in fairly "mundane" applications by comparison. They were looking at the technique for automated character recognition. This is still difficult, which is why sites use CAPTCHA - those "I'm not a robot" things. Character recognition against a noisy background is still a difficult challenge. There was some interest in looking at the Fourier transforms of lung disease, as many of the non-focal lung diseases result in very fine (think high spatial frequency) detail changes in the lungs. An optical Fourier transform is effectively done at light speed and you can at least see the magnitude. Spectral displays are still currently used in ultrasound. Doppler signals from moving blood carry information besides the usual velocity signal. Turbulence caused by narrowed vessels results in a wider Doppler spectrum. This can be displayed as the actual spectrum but more commonly as a "third dimension" in addition to time and velocity - the waveforms are given a grayscale value that represents the integrated spectrum. Brighter grayscale values represent a broader spectrum and so are indicators of turbulent flow. In 2D blood flow images, the velocities can be coded in color. Turbulence looks like colored confetti; laminar flow, not so much.
Experienced sonographers and the physicians who interpret these studies can also usually hear turbulence by listening to the Doppler signal.

I'd bet that submarine sonar operators have ways of listening and looking for turbulence.

Low-pass filtering to generate an "unsharp mask" was used to sharpen images (photographers did this) by subtracting the unsharp image from the one you wanted to sharpen. This could be done in an analog fashion in an x-ray darkroom, but was cumbersome. Digital image processing completely replaced that. There was an analog subtraction system used for many years. In angiography, the goal is often to see the blood vessels and not the stuff around them. So an image was taken of the anatomical area of interest before contrast injection, then a series of contrast images were taken (the equipment for that was quite something to see - and hear - in operation; imagine zipping five 14 x 14-inch films per second through a film handling device). In the darkroom, the darkroom technician would first make a negative of the non-contrast image. It was then sandwiched with the contrast image, the result was sandwiched with a piece of unexposed film, and the whole stack was exposed in a special light box. The result was "subtraction" of the tissues that did not contain contrast, leaving the vessels standing out. All analog. I think that early systems did use a storage tube for this in the angiographic room - the fluoroscopic video signal was inverted and stored for the subtraction mask, then the contrast images and subtraction mask were combined for the display. I am not sure of the electronics for this but basically, the two images were added. Again, digital techniques rapidly replaced this once they could process the images at a reasonable speed.

Part of my interest in radiology and why I chose it as a career was because of the intersection of engineering and physics with healthcare.
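The unsharp-mask subtraction described above drops straight into code. A minimal digital sketch, assuming a crude separable box blur for the low-pass step (the kernel radius and `amount` gain are illustrative choices, not anything from the thread):

```python
import numpy as np

def unsharp_mask(image, radius=2, amount=1.0):
    """Sharpen by subtracting a blurred ("unsharp") copy from the original
    and adding the difference back, scaled by `amount`. Illustrative sketch."""
    img = np.asarray(image, dtype=float)
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    # separable box blur: low-pass filter along each axis in turn
    blurred = img
    for axis in (0, 1):
        blurred = np.apply_along_axis(
            lambda row: np.convolve(row, kernel, mode="same"), axis, blurred)
    # (original - blurred) is the high-frequency detail the darkroom
    # film sandwich isolated; adding it back sharpens the image
    return img + amount * (img - blurred)
```

Flat regions pass through unchanged; edges gain contrast (the familiar overshoot), which is exactly what the analog film-sandwich version produced.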
The history of this is fascinating. One of the first attempts at CT-scan-like ultrasound was a machine built out of a B-17 ball turret to move the transducer around the patient (well, volunteer; this was never used clinically). Oh, and we had Tek scopes and monitors all over the place. Mostly Tek displays in ultrasound (they were used to display the Doppler waveforms and spectra), and they were favored for many of the diagnostic displays for fluoroscopy. Tek had really excellent display-to-display consistency. The health physics folks and the equipment maintenance people used almost exclusively Tek scopes for troubleshooting and maintenance.

Those were the days, but digital technology has revolutionized medical imaging as it has for so much of what we do now.

Steve H.

On Sat, Jan 22, 2022 at 10:04 stevenhorii via groups.io <sonodocsch=gmail.com@groups.io> wrote:

Optical correlation and convolution are analog methods that have been used for years. We did some of this years ago when examining the modulation transfer function of x-ray imaging systems. We used a combination analog/digital system for this.
A scanning microdensitometer for scanning the film image of the knife edge, then the derivative for the line spread function, and an FFT for the MTF (sort of an abbreviated description). We put a shaft encoder on the pen-driving mechanism of the microdensitometer (its output was a plot on paper) to digitize the signal. But I had colleagues who were doing optical deconvolution for noise reduction (provided they knew what the noise properties were, though there are some publications on blind deconvolution, but those use digital methods) and trying template matching in the Fourier domain for object detection. Using a vidicon to "read" a CRT sounds vaguely familiar to me, such as a tube that did this, combining both in a single vacuum envelope.

RCA had a storage tube called a Radechon that was used in early computing systems. It could do things like time base changes and noise reduction through integrating signals. There's a copy of the Radechon advertisement here:

http://coldwar-c4i.net/SAGE/EE0858-BC.html

Unfortunately, the links don't work, but there are other articles and even the data sheet on the Radechon online. They turn up for sale as well - there is one on eBay now (just search "Radechon"). I'm not the seller and have no conflict of interest with him or her. Oh, and I purchased a Radechon several years ago on eBay. I added it to my collection of old computer stuff. If any of you worked with room-sized computers, many kept a Tek scope on a cart tucked away for use by the service folks.
There was one (don't remember which model, but I think it was a 465) on a cart in the computer room of the 7094 mod 2 I used in college.

Steve H.

On Sat, Jan 22, 2022 at 02:11 cheater cheater <cheater00social@gmail.com> wrote:

Hi all,
I have recently stumbled upon the idea of using vidicons and CRTs for analog convolution. Convolution of a continuous incoming signal (say audio) with an FIR (finite impulse response) is one way of creating echo, simulating filters, etc. By far the most common method of calculating convolution nowadays is via the DFT (discrete Fourier transform), usually computed with the FFT (fast Fourier transform).

I was wondering if anyone used vidicons and can confirm whether this scheme would work.

The first time I heard of vidicons being used together with CRTs was when I read up on old TV stations and how they converted from 50 to 60 Hz or between line counts. They essentially filmed a TV, but the device eventually became an integrated CRT-Vidicon invention where the CRT and vidicon both scanned horizontally. However, if you make them scan at 90 degrees to each other, you can use that for computation.

One way of calculating convolution is as follows. Let's assume for a second that the signal and the FIR take values from -1 to 1. For every sample of the signal you have coming in, call that sample s_t, you start outputting s_t * FIR - you output all samples of the FIR subsequently, scaled by s_t. So on this sample, you output s_t * FIR_1, where FIR_1 is the first sample of the FIR. On the next sample, say s_t+1, you start outputting s_t+1 * FIR, while also continuing to play back s_t * FIR. So on the second sample you output s_t+1 * FIR_1, but also s_t * FIR_2.

Let's say you take a CRT and break it up into, say, a grid of 100 x 100 points.
Scan them horizontally from top left to bottom right. Start out in the top left. Wait for a sample of the signal, say s_1, to come in. Horizontally, you start outputting - using the Z channel (brightness) - the samples s_1*FIR_1, s_1*FIR_2, etc.

Wait for the next sample to come in, and meanwhile move to the 2nd line and start on the second point. When sample s_2 comes in, start outputting s_2*FIR_1, s_2*FIR_2, until you hit the end of the line, which is s_2*FIR_99. Then wrap around to the start of line 2 (without advancing to line 3), and output s_2*FIR_100.

So on the nth scan line, you start outputting at the nth "pixel". On the 100th scan line, you start at the 100th pixel. Then you wrap around to the first line and start on the 1st pixel.

Next for the vidicon. Orient it towards the CRT, but at an angle of 90 degrees. So it should be scanning vertically, top to bottom, left to right, starting from the top left and ending at the bottom right. It has scan columns rather than scan rows.

For every sample you have coming in, as you are outputting the relevant row to the CRT, scan a single column. You can start before the row is done outputting, because all previous rows are done, and as for the latest row, you only care about the home pixel (the pixel that is first put on the screen, i.e. for the nth line it's the nth pixel). While scanning this, sum up the brightness over your whole scan, and that's your output. That just calculated a convolution between the FIR and the incoming signal. It's a 100-point convolution. Cool, right?

Here's an even cooler thing. Nothing says you have to make the FIR have as many points as there are rows. You can make the FIR a continuous signal. Then, you can scan that 100-row image using more than 100 columns. Say, scan it at 400 columns.
This then just means that the FIR has a bandwidth 4x higher than the signal it's being applied to.

And finally, of course, you can output a different FIR for every incoming sample (for every incoming row). This would let you, e.g., apply a parametric filter via convolution.

Regarding practical implementation of the CRT output. For simplicity, I described above a situation where you're outputting horizontal lines of variable brightness in a raster. However, to preserve the phosphor, when increasing the Z (brightness) of the trace, you would also add a little bit of a high-frequency signal to make the line thicker. The vidicon and the subsequent averaging circuit will like it just as well, and probably even better. This way you can prevent incoming hot spots. Also, it doesn't matter if the lines intersect - they might just as well all be at the same level. However, then the phosphor might get more hot spots, and the non-linearity of phosphor brightness vs. cathode current might come into play. The raster was just to illustrate the structure of the algorithm, but if you need some form of accuracy, then you might want to keep lines from intersecting - but at that point, why are you even doing this with a vidicon?
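The raster scheme described above can be checked numerically. A sketch in Python modeling the phosphor grid as an array (the function name and grid handling are my own; 0-based indices stand in for FIR_1..FIR_100): each incoming sample paints one row with s_t * FIR, circularly shifted so playback starts at that row's home pixel, and the vidicon's vertical scan sums one column of brightness per sample.

```python
import numpy as np

def crt_vidicon_convolve(signal, fir):
    """Digital model of the CRT/vidicon raster convolution scheme."""
    fir = np.asarray(fir, dtype=float)
    n = len(fir)
    grid = np.zeros((n, n))  # the "phosphor": n scan lines of n pixels each
    out = []
    for t, s in enumerate(signal):
        row = t % n
        # CRT: paint row `row` with s * FIR, wrapped so it starts at the
        # home pixel (the row'th pixel of that line)
        grid[row] = np.roll(s * fir, row)
        # Vidicon: scan one vertical column and integrate its brightness
        out.append(grid[:, t % n].sum())
    return np.array(out)
```

At step t, column t mod n crosses each row exactly where that row's playback has advanced to, so the integrated brightness equals the direct FIR convolution sum. (A real CRT can't emit negative brightness, so the -1..1 values would in practice need a DC offset or a push-pull two-channel arrangement.)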