
Todd's ELF-ish gets some more I/O #ELF #Homebrew

taf123
 

Hi ELF-ish fans!

Now that the ELF-ish base build is finished and we have the first memory expansion board to bring the system up to 4k RAM, it's time to turn our attention to populating the upper half of the expansion chassis with some interesting I/O and support chips.

But, with the space constraints of the board, the old-style, discrete-based RS-232 driver had to go.  I decided to include a simple 4-pin connector between the Byte-I/O ports for the bit-banging serial port, and use an external MAX232-based RS-232 driver.



With all of that space, I packed in the sockets for the next stages in expansion.




Since some of the upcoming I/O was going to be memory-mapped, I had to decode the upper memory space more fully.  I decided to reserve the 16k from 8000 - BFFF for ROM, of which the first 4k was on the main ELF-ish board.

The remaining 16k would be decoded into 4k blocks, one of which would be further decoded into 1k blocks.

To do this, I needed access to the full MA8 to MA15, so I used another CDP1875 byte-output chip as an address latch.


For implementing the decoding I used both halves of a 4556 Dual Binary to 1 of 4 Decoder.  The first half decodes the upper 16k to 4k blocks, the other half further decodes the 4k starting at C000 into 1k blocks.

The CDP1824 would be moved to the first 1k at C000.  Yes, I'm burning a full 1k of space for only 32 bytes, but sometimes it's easier not to fully decode.
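
Putting that all together, the upper 32k map now looks like this:

  8000 - BFFF   ROM (16k; the first 4k on the main ELF-ish board)
  C000 - FFFF   four 4k expansion blocks
  C000 - CFFF   further decoded into 1k blocks
  C000 - C3FF   CDP1824 32-byte RAM (partially decoded)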

Here's the updated decoding, the CDP1824, and the memory-space CDP1856 buffers.


And here's the wiring for these chips.



And ready for testing.



I don't seem to have any screenshots, but you'll have to take my word for it that I could now access the CDP1824 starting at C000.  I updated my UT4 EPROM to save the registers there instead.



taf123
 

Since the ELF-ish was starting to grow up, I decided to plan for having a more sophisticated monitor ROM at some point.  My idea was that the system should do a power up reset and automatically run the monitor.  In other words, the RUN_U and RUN_P buttons wouldn't be required any more for getting out of RESET into RUN.  But I still wanted single step mode, which would require a RUN button, so the controls in the expansion chassis would need to be re-worked.

For the power on reset, I could use the classic RC circuit.  But I decided to borrow another idea from the ELF2k folks, and use a DS1233 5V EconoReset.  This provides three useful functions:

  • POR, with a 350ms delay to allow the power supply and processor to stabilize.
  • Debouncing of the pushbutton reset, generating a 350ms reset pulse upon release.
  • Monitoring of the power supply, forcing a reset when it goes out of tolerance.
This last point was also covered by the DS1321 used for the battery backup of the lower 1k of RAM.  However, by setting the tolerances differently, the DS1321 can start preparing for power loss when the power rail is down by 5%, with the DS1233 waiting until it is down by 10% before forcing the reset.

Putting all of this together required a bit of rework of the control circuit, with RUN_P no longer doing anything.



I installed the DS1233 and capacitor down near the RESET button.



Since the RUN_U signal could no longer be used to jam the 8000 for ROM start up, I reworked that circuit back to just supporting the VIP mode.  This required me to pick up the RESET signal and feed it back over to the main chassis.

But I also wanted the RUN toggle to be able to set the jamming flip-flop if it was used, so I used a diode/resistor OR function to allow either method.



The resulting SET_MON signal then goes to the jamming ff.


Since there was no RUN_U signal anymore, I had to update UT4 to issue the OUT 4 command to clear the jamming ff, rather than just hoping that the operator will let go of the button.


taf123
 

Before getting on to the first interesting I/O device, I have to correct the upper memory decode story.

The decode schematic I posted previously was an early attempt which did not work properly.  The problem was that the ROM on the main board was still using the original partial decoding, based only upon the state of MA12, to select between the ROM at 8000 and the CDP1824 at the old location of 9000.

With the new decoding scheme, the ROM on the main board has to be disabled when any of the devices in the upper 16k of memory are accessed.  So I removed the 4013 ff latching MA4 as MA12, and instead generated a signal from the decoder to deactivate the ROM by raising the !R4K0 line HI whenever MA14 and MA15 are both HI (upper 16k).  I also included a jumper to disable the decoder in case I wanted to let something else on a future expansion board use this space instead.

Finally, I pre-wired all of the allocated decoded !CS lines to the 4068 to activate the CDP1856's.  So here's the fully functional upper memory decoder.


Now that the memory decode is correct, I can get on to I/O expansion.

The first device to be added to the memory-mapped I/O space is not actually an I/O device, but a very important support chip - a CDP1877 Programmable Interrupt Controller.

This device adds support for eight interrupt lines, with priority 7 (highest) to 0 (lowest).  These can be cascaded to support more interrupt lines, in increments of 8.

The CDP1877 is programmed to generate an LBR (Long Branch) to the ISR for each priority.  The PIC (or PICs) occupies the complete 4k block starting at E000, with the address of the vector register programmed into R1.

When an unmasked interrupt occurs, the CDP1877 asserts the CDP1802 !INTERRUPT line, which forces execution onto R1 (effectively a SEP to R1) as per normal interrupt processing.  The CDP1877 then responds with the LBR <ISR address> for the corresponding interrupt.
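
Since the PIC hands the CPU an LBR through R1, R1 is also the ISR's program counter, and it can't simply be reloaded mid-routine (writing to your own PC is a jump).  So an ISR skeleton ends up looking something like this - a rough, untested sketch, with VECREG standing in for wherever the vector register actually decodes within the E000 block, and RF borrowed as a scratch program counter:

  ISR:   DEC  R2           ; push T (the saved X,P) onto the stack
         SAV               ; M(R2) = T
         ;
         ; ... service the interrupting device here ...
         ; (save/restore D and DF as well if the ISR uses them)
         ;
         LDI  A.1(EXIT)    ; hop onto RF so R1 (the current PC)
         PHI  RF           ; can be reloaded safely
         LDI  A.0(EXIT)
         PLO  RF
         SEP  RF
  EXIT:  LDI  A.1(VECREG)  ; point R1 back at the 1877 vector
         PHI  R1           ; register, ready for the next interrupt
         LDI  A.0(VECREG)
         PLO  R1
         SEX  R2
         RET               ; restore X,P from the stack, IE back on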

With the decoding done, the CDP1877 is easy to add.  A set of pull-up resistors is added to ensure unused IR lines do not cause spurious interrupts.  I did add a jumper to allow the main !INTERRUPT line to be disconnected until I have support for interrupts included in my future super-monitor.




I managed to get the CDP1877 in sexy mil-spec purple ceramic with the metal cap.



Until I'm ready to start writing ISR's, all I could do for testing was to poll the status register and clear any pending interrupts.

taf123
 

Next up, I intended to add an actual I/O device.  Since this was going to be an I/O-mapped device, I had to bite the bullet and finish converting the system to the full two-level I/O.

First off, I removed all of the single-level N-decoding, except for I64 used to clear the 8000 jamming flip-flop.  This allowed me to use N=1 to latch the I/O level, as per the CDP1853 data sheet design.

I decided at this point to decommission the STG1861 PIXIE graphics system as I was planning for bigger and better things.  A device which steals half the CPU cycles, requires clock sync with the CPU at only 1.79 MHz, throws out a ton of time-critical interrupts, and disallows the 3-cycle commands, was just too limiting.

These decisions allowed me to remove the 4028 used for input N-line decoding, as well as freeing up some of the devices used to interface with the STG1861.

Next, I had to update everything else to the two-level I/O.  First, I decided to name the two-level decoded N-lines using a scheme which includes both the I/O level and the N-line decode value.  For example, signal IO0-5 is the N=5 decode when the latched I/O bit 0 is set (so level 0x01).

With what was coming, all level 0x01 N-decode values, 2 through 7, would be used.  So I pre-wired these to the 4078 to enable the CDP1857's.  Here's the updated two-level I/O decode.



I then had to rewire the remaining I/O devices to use the new two-level decodes instead.  For example, I used IO0-5 for the HEX displays and Data switches.
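
In code, hitting one of these ports is now a two-step affair - something like this sketch, using the usual X=2 stack convention:

         SEX  R2           ; X points at a scratch location
         LDI  01H          ; level 0x01 = IO0
         STR  R2
         OUT  1            ; latch the I/O level (N=1)
         DEC  R2           ; OUT bumped R(X); back it up
         LDI  0AAH         ; test pattern for the displays
         STR  R2
         OUT  5            ; IO0-5: the HEX displays
         DEC  R2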



Phew.  With that done, I could get on with adding the next I/O device.

I was really getting fed up with the 300 baud limit of UT4's bit-banging interface and wanted to move into the '80s with a 9600bps serial connection.  Plus, I wanted to free up the Q and !EF4 lines for other uses.  And critical software timing loops go out the window when ISRs are being used.  So, enter the CDP1854 UART.

The CDP1854 can operate in one of two modes.  With the MODE pin tied LO, it is directly compatible with the then industry-standard UART's such as the TR1602A and CDP6402.  Yawn.

But with the MODE pin tied HI, it operates in the CDP1802 I/O space and can be programmed and operated through register reads and writes, either with !EFx polling or interrupt driven.  This is how I planned to use it.

It does need an external clock generator.  A standard option at the time would be something like the MC14411, which uses a crystal-controlled internal oscillator and generates several standard bit rates on various pins.  This would usually require jumper settings to select the clock rate one wanted.

I wanted something more software-controllable, and preferably from the CDP18xx line.  I decided to go with the CDP1863 8-bit Programmable Frequency Generator.  This does require an external clock source, but why not a buffered version of the CPU clock?  By using a 2.4576 MHz crystal for the CPU, the CDP1863 can generate all of the standard bit rates up to 9600bps.  Perfect!

Well, almost.  I was a bit concerned by how far this chip would be from the CPU.  Looking at the buffered clock pulse with the old 'scope at that distance made me decide to use a 74HC14 Schmitt Trigger to reshape it.

I added another 4-pin connector to allow this to be used together with the bit banging serial interface during software development.

The CDP1854 uses two N-decodes (two I/O ports), one to select the Control/Status registers and one the transmitter/receiver data registers.  It is intended to decode the N-lines directly, with N0 going to RSEL (the register select line), and N1 and N2 going to two of the three CS lines.  By connecting N1 to CS1 and N2 to !CS2, the CDP1854 responds to N=2 and N=3.

It also has the additional CS3 line.  In single-level I/O operation, this would just be tied HI.  But in two-level, the CDP1854 should only respond to N=2 and N=3 when the correct I/O level has been selected.  So I tied the IO0 level line to the CS3 input.  Phew.  This method will be repeated for other devices which are designed to do their own N-line decoding.

Here's the combined CDP1854 and 1863 schematic snippet:


Note that there is no jumper to select between !EFx polling and interrupt operation, since polling is performed by reading the status register.





Actually, it looks like these pictures were taken before the 74HC14 was added, as it occupies that empty space right in the middle.



And ready for testing.



I wrote a very simple, quick and dirty test program which would write "HELLO WORLD" and then read any input character and just echo it back.
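
The echo half was something along these lines (a reconstructed sketch, untested; I'm assuming N=2 is the data register and N=3 control/status, and that DA is bit 0 and THRE bit 7 of the status register - check the data sheet):

         SEX  R2
         LDI  01H          ; select I/O level 0x01 (IO0) for the UART
         STR  R2
         OUT  1
         DEC  R2
         LDI  CTLWRD       ; placeholder - build the control word
         STR  R2           ; (8 data bits, no parity, 1 stop) from
         OUT  3            ; the 1854 data sheet bit map
         DEC  R2
  ECHO:  INP  3            ; read the status register into D
         ANI  01H          ; DA - received data available?
         BZ   ECHO
         INP  2            ; read the character
         PLO  R7           ; stash it (INP 3 below reuses M(R(X)))
  WTX:   INP  3
         ANI  80H          ; THRE - transmit holding reg empty?
         BZ   WTX
         GLO  R7           ; recover the character
         STR  R2
         OUT  2            ; send it back
         DEC  R2
         BR   ECHO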



Welcome to 9600bps! Yes.

cmdrcosmac
 

Todd,
 What an impressive build! It's a BEAST!!
 A few thoughts...
 Some of those old 1800-series and 4000-series chips were a lot slower than modern 74HC stuff. Also the 2101's.
Those things ran 200-400 ns. Modern RAM at 35-70 ns is common and cheap. I don't know how fast you intend to run,
but you may encounter some bottlenecks.
 
 Having the Elf run the monitor at reset is a mixed blessing. Yes, a reset gets you back in control, but then you
have to type the $Pxxxx or whatever to run the application. And you have to assemble your code somewhere above
the Monitor. If you have a dump and no source it can be a hassle. The RUN_U/RUN_P is a great convenience.
In Ipso Facto No. 21 is shown a circuit that will force a run to monitor at any address. It's simpler and more flexible
than the JAM_8000 scheme and needs no mod to the code.

 I feel your pain with the 300 Baud. When I got my Super Elf I replaced the clock with a can oscillator and an HC4040
and ran at 4.91MHz. IDIOT ran at 4800 Baud. I was happy. Then David Madole released serial routines that ran at 19.2kBaud.
Had to have it! First I replaced the clock to get the standard 1.78 MHz clock. I then found I couldn't get more than
1200 Baud out of IDIOT.  So I patched David's code into IDIOT. What I did was a bit crude, but when it worked I set the
CPU clock to 3.57 MHz and the terminal to 38400 Baud and it flies. !M loads go at least 10 times faster than before.
If you have to load up something like FIG-Forth, you'll want every Baud you can get.
 You can get a lot more than 9600 Baud out of the 1854. And it should not be hard to patch UT4 to use the UART.

-Chuck

taf123
 

Hi Chuck -

On Fri, May 10, 2019 at 11:23 PM, cmdrcosmac wrote:
Todd,
 What an impressive build! It's a BEAST!!
Thanks - she'll get beastier still.

 A few thoughts...
 Some of those old 1800-series and 4000-series chips were a lot slower than modern 74HC stuff. Also the 2101's.
Those things ran 200-400 ns. Modern RAM at 35-70 ns is common and cheap. I don't know how fast you intend to run,
but you may encounter some bottlenecks.
Thanks.  My design approach has been to use as many of the CDP1800 series as I could source - and I managed to get just about the complete set.

Deciding to use CDP1821's for the second 1k was the most expensive and time-consuming method I can think of to achieve 1k x 8-bit.  But it was very satisfying when it all came together.

With a maximum clock rate of 3.2 MHz for the 1802AC, that makes a period of 312.5 ns.  Using the CPU clock to derive the UART clock, I'm stuck at 2.4576 MHz with a period of 406.9 ns.

With 8 periods per 1802 machine cycle, things aren't moving very fast ;-)
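
(2.4576 MHz / 8 = 307,200 machine cycles per second, and at 2 or 3 machine cycles per instruction, that's somewhere around 100-150k instructions per second.)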

It is a valid point though, and as the system sprawl grows, I do end up making some changes to faster 74HC devices in places.  But that's to come.

 
 Having the Elf run the monitor at reset is a mixed blessing.
True.  I haven't really decided which is best.  The nice thing about the wire-wrap approach is that I can always change my mind (again!).

In Ipso Facto No. 21 is shown a circuit that will force a run to monitor at any address.
Thanks for the tip.  Looks like there is a lot of good stuff crammed in that series - I'll have to spend some quality time with the archive.

 I feel your pain with the 300 Baud.
It's one of the reasons I put battery backup on all of the memory - so I don't have to reload the programs.

You can get a lot more than 9600 Baud out of the 1854.
The CDP1854 can support serial rates up to 200kbps, although I'm sure that would need to be in a synchronous mode.

The limitation here is my choice of using the CDP1863 to generate the Tx and Rx clocks.  Since the CPU clock is > 2 MHz, I have to use the CLK2 input, which has a /8 pre-scaler, and it also has a final /2 on the output.  Finally, the CDP1854 clock has to be 16x the serial rate.  So, the maximum I can run it at is 2.4576 MHz / 8 / 2 / 16 = 9600.
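
If I've read the 1863 data sheet right, the general relationship is

  rate = 2.4576 MHz / 8 / (N+1) / 2 / 16    (N = the byte loaded into the 1863)

so the standard rates fall out as N=0 -> 9600, N=1 -> 4800, N=3 -> 2400, N=7 -> 1200, N=15 -> 600 and N=31 -> 300.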

That's a 32x increase over what I was getting, so it's an acceptable starting point.

And it should not be hard to patch UT4 to use the UART.
Yep, that's my plan.  But I find I spend most of my available hobby time on the design and build and not so much on the software so far.  But it's a coming.

Best regards,
Todd

taf123
 

With the UART finished, I decided to add a CDP1879 Real-Time Clock.

This fits into the memory-mapped I/O space very nicely, so I put it at C400.

The data sheet has very useful ideas which I've incorporated.  First, it supports standby time-keeping down to 2.5V with the use of an external oscillator running at 32.768kHz, and is nice enough to supply the details for one based upon the 74HC04 hex inverter.

Again, I decided to use the DS1321 to handle battery cut-over.

Also, the data sheet shows an interesting application where you can stop the CPU clock to put the system into a low power mode, and then use the RTC's alarm capability to restart at a set time.  I decided to incorporate this as well.



The control of the CPU clock uses two 40107 open-drain NAND gates.  The first is shown above in the RTC circuit.  The second one is installed over on the main board next to the CPU clock, where it can stop it by holding the !XTAL line LO.



The design in the data sheet uses the Q line to stop the clock.  But since I'm still using my Q line for the UT4 serial interface (until I update the code to use the freshly installed CDP1854 UART), I decided to use one of the lines from the two-level I/O latch to do this instead.



The CDP1879 is there in the upper middle, surrounded by its support chips and the oscillator.





And the test run.



Since the RTC sits in memory-mapped I/O space, I can access it directly from UT4 with memory reads and writes.



The first write stops the clock.  The second sets the time: ss mm hh dd MM.

The hour encoding includes a 12-hour am/pm flag, or can be in 24-hour format.  So it was 19:35:00 on Oct 28th when I did this test.
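
(Assuming the counters are BCD, that works out to writing 00 35 19 28 10 for ss mm hh dd MM - give or take the format flag in the hour byte.)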

The next write starts the clock.  The following reads show the clock running.

I then shut off the power for about an hour, turned it all back on, and checked the clock.



Looks like it's keeping time.  Yay.

These cell batteries I'm using work great for the CMOS SRAM in standby.  But since the 32kHz oscillator has to keep running, this circuit eats these cells.  I'll re-wire for larger batteries eventually.

When I did a longer test run, I found the time had slipped quite a bit.  So, it was time to search eBay for a used frequency counter so I could check and adjust that oscillator.

Sure enough, it was running fast, at 41.6kHz.



At the maximum range on the trimmer cap, the closest I could get was 34.1kHz.



So I put a 39pF cap in parallel with the trimmer.



This brought my oscillator much closer to where it should be at 32.771kHz.



If you're going to play with oscillators, a frequency counter is a must, and there are some very good deals out there.

I still haven't gotten around to testing the CPU stop feature yet, so that will be a future installment.

The RTC has additional features as well.  It can interrupt on an alarm, or at regular set intervals.  It can also output a square wave based upon the internal counters.


taf123
 

This next device is a departure both from the CDP1800 series and from the CMOS technology I've been leaning towards so far.

It's an AM9511A-1DC Arithmetic Processing Unit, or APU, by AMD.  The original AM9511 was from 1978, and my AM9511A-1DC is (c)1979, so it's period.

The device handles signed 16- or 32-bit fixed point and 32-bit floating point add, subtract, multiply and divide.  Plus it has 11 "derived" functions, such as square root, trig functions like sine, log and exp, and power.

There are also some data manipulation functions, such as converting from fixed to floating, or vice versa.

And all of this in a 24-pin package.

The device is stack-oriented.  First, you have to use two or four 8-bit pushes to load one operand, then another two or four to load the other operand.  The command is then issued and the APU processes it.  When it finishes, it asserts the !END line and the answer can be popped off the top of the stack.  Or, another operand can be pushed and another operation begun.

Depending upon the operation, it may take anywhere from 10-20 cycles to several thousand for some of the derived functions.

The APU also has a !PAUSE line to make the CPU wait if it is not ready for access.

It all looks straightforward to add to the ELF-ish in memory-mapped I/O space.  I placed it at CC00.  Only two addresses are needed; CC00 for the data and CC01 for the command and status register.
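
As a sketch of the programming model (untested; RA/RB are just the pointer registers I've picked here, and I'm assuming BUSY is bit 7 of the status register):

         LDI  0CCH         ; RA -> CC00 (data), RB -> CC01 (cmd/status)
         PHI  RA
         PHI  RB
         LDI  00H
         PLO  RA
         LDI  01H
         PLO  RB
         LDI  34H          ; push 1234h onto the APU stack, LSB first
         STR  RA           ; (each write to CC00 pushes a byte,
         LDI  12H          ;  each read pops one)
         STR  RA
         LDI  78H          ; push 5678h
         STR  RA
         LDI  56H
         STR  RA
         LDI  6CH          ; issue the 16-bit fixed-point add
         STR  RB
  BUSY:  LDN  RB           ; poll the status register
         ANI  80H          ; still busy?
         BNZ  BUSY
         LDN  RA           ; pop the result, MSB first: 68h...
         PLO  RC           ; (stash it somewhere)
         LDN  RA           ; ...then ACh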

One little gotcha is that it's an older-style NMOS part, requiring both +5V and +12V supplies.  So I decided to feed the ELF-ish 5V regulator with a 12V supply instead of the original 9V, and then patch the 12V supply over directly to the APU.

It needs a clock, with the AM9511-1DC version supporting up to 3 MHz, so I use a buffered version of the 2.4576 MHz CPU clock.


I can't remember why I added the pull-up resistors to the buffered memory bus, except perhaps because the data sheet says that VOH is a minimum of 3.7V.  Although that looks to be under TTL load conditions, so it probably would have been just fine driving CMOS.



There it is in the upper right, with the connector for the +12V feed.



So, did it work? Again, since it's memory-mapped, I can access it directly from UT4 using memory read and write commands.

With my first attempt to access it, the ELF-ish just locked up and needed to be reset.

Time to break out the logic analyzer and see what's going down in groove town.






This was a simple attempt to read the status register at CC01.

What we see here is a standard CDP1802 memory-read execute cycle.  The !CS, which is decoded from the high addresses, occurs near the end of timing pulse TPA, with !MRD being asserted near the middle of TPA.

MA0 only takes on the value for the lower byte, the 01, after TPA.  So what we have here is a timing problem.  At the time that !CS and !MRD are asserted, A0 = 0, so the device thinks this is a data read (a POP from the stack).  It is slow at this due to the internal stack access time, so it asserts the !PAUSE line to make the CPU wait.  The CPU !WAIT line asserts a moment later due to the propagation delay in the control gating.

But, the APU never de-asserts !PAUSE, so we're waiting forever.  The timing diagram in the data sheet does show that !CS should occur at least 25ns before !MRD is asserted, which we have clearly violated here since !MRD is coming before !CS.

The data sheet doesn't say what happens if this is violated, but a separately available manual says this will cause the AM9511 to malfunction - which I guess we could say we're seeing here.

So I need to delay the !MRD signal.  If I gate it with the !CS signal, that solves the malfunction issue, but !MRD would still be occurring before A0 has taken on its value from the lower byte of the address, so an access to the Data register may occur when we wanted the Control or Status register.

In other words, I need to gate !MRD with a sufficiently delayed version of the !CS signal.

I know there are better ways to do this, but I'm out of space on the board for any more chips, so I just used some of the spare gates sitting around to compound propagation delays.  This ugly hack is what I ended up with.



Here are the signals with this change in place.



The key is that !MRD is now happening well after !CS, at the same time as A0 is asserting high.  According to the timing in the data sheet, the minimum hold time between C/!D (which A0 is connected to) and !MRD is 0ns, so it's okay for these to happen together.

There is still a brief assertion of the !PAUSE line - the AM9511 pauses for every access, even if just briefly as in this case.

But then the read occurs and everything proceeds.

Here's a screen shot of the UT4 basic APU test.



The plan is to do a 16-bit fixed point add of 0x1234 with 0x5678, which should equal 0x68AC.

The first four lines push the operands onto the internal stack, at CC00, LSB first.

I then issue the add command by writing 6C to the command register at CC01.

I then pop the two-byte result off the stack, MSB first.

Looks like success to me!  Yay!

One more signal trace for all of the nerds in the audience.  This is captured between issuing the 6C add command, and the APU asserting the !END line.



Issuing a command is a write cycle.  The CDP1802 asserts !MWR much later in the execute cycle, so there weren't any timing issues here.  The APU starts executing the command when !MWR de-asserts, as shown by the blue line.

Then, a couple of machine cycles later, the APU finishes and pulses the !END line.  If that were tied to !INTERRUPT (or one of the !IRx lines of the CDP1877 PIC), an ISR could process the results.

I'm happy with these results.  If you're interested in this APU, or floating point maths in general, and all kinds of AM9511 and AM9512 gory details, check out the extended manual.

I found a copy at http://ep.homeserver.hu/PDF/AM9511A-9512.pdf.





Lee Hart
 

taf123 wrote:
This next device is a departure both from the CDP1800 series and from
the CMOS technology I've been leaning towards so far.

It's an AM9511A-1DC Arithmetic Processing Unit, or APU, by AMD. The
original AM9511 was from 1978, and my AM9511A-1DC is (c)1979, so it's
period.
That's a pretty powerful chip; both in calculating and in power consumption. :-) It will take some work to adapt something like BASIC or FORTH to use it for its math routines.

There is a CMOS alternative. RCA made the CDP1855 multiply/divide unit. It's less powerful than the 9511, but CMOS and part of the 1802 family. They are hard to get, but I have one somewhere.

In case I haven't mentioned it before, I'm mightily impressed by your ELF-ish computer. You have also spent as much time documenting it as building it!

--
Fools ignore complexity. Pragmatists suffer it. The wise avoid it.
Geniuses remove it. -- Alan Perlis, "Epigrams on Programming"
--
Lee Hart, 814 8th Ave N, Sartell MN 56377, www.sunrise-ev.com

cmdrcosmac
 

Hi Todd,

>> It's one of the reasons I put battery backup on all of the memory - so I don't have to reload the programs.

Note that a common bug when programming is a runaway stack, wherein the INC STACK and DEC STACK's aren't balanced.
This causes a stack crawl and may overwrite the program. Time to reload. I have an EEPROM to save stuff in.
The Elf and IDIOT's !M load routine is slow enough that the chip's byte write mode works just like RAM. I then
disable write with a jumper and block-move the code into RAM.

>> And it should not be hard to patch UT4 to use the UART.
 Since you have the UART, that's probably the easiest. But keep the Q/EF wiring in place as most 1802 software uses
bit-banged serial I/O.

 As for Monitors, most folks appear to be using UT4, IDIOT, Chuck Yakym's MCSMP, or ELF2K. The first 3 have similar
functions. ELF2K supports a disk operating system that can run on CF card "disks". MCSMP loads and saves I8hex files.
IDIOT and UT4 use RCA's !M format. MCSMP will run 9600 Baud at a CPU clock of 1.8 MHz. I use IDIOT, and an assembler that
creates RCA-compatible list files that IDIOT (or UT4) can load. IDIOT is run-time relocatable, and will find RAM to
save the registers. UT4 needs to be assembled with a RAM location coded in. UT4 supports JAM_8000 RUN_U, IDIOT does not.
 I am redesigning my Elf expander to locate IDIOT at F800, an EEPROM at 8000 or C000, and the rest RAM. RUN_U will use
the scheme in Ipso Facto.

 I like the labels on the underside of the chip sockets. How are these done??

>>  My design approach has been to use as many of the CDP1800 series as I could source
>> - and I managed to get just about the complete set.

Where did you get the 1863 frequency divider??
Please tell us more about the Logic Analyzer!

Best,
Chuck

taf123
 

Hi Lee -

On Sat, May 11, 2019 at 11:12 AM, Lee Hart wrote:
That's a pretty powerful chip; both in calculating and in power
consumption. :-) It will take some work to adapt something like BASIC or
FORTH to use it for its math routines.
Yep, I'll have to figure out how to interact with it properly.  I'm slower at looking into the SW compared to the ever-expanding HW.

There is a CMOS alternative. RCA made the CDP1855 multiply/divide unit.
It's less powerful than the 9511, but CMOS and part of the 1802 family.
They are hard to get, but I have one somewhere.
I also have some CDP1855's, which I'll probably add in here somewhere eventually.  I was originally going to use these until I stumbled over the AM9511.

In case I haven't mentioned it before, I'm mightily impressed by your
ELF-is computer. You have also spent as much time documenting it as
building it!
Thanks.  Hope you've enjoyed it as much as I have.

Regards,
Todd

taf123
 

Hi again Chuck -

On Sat, May 11, 2019 at 11:58 AM, cmdrcosmac wrote:
>> It's one of the reasons I put battery backup on all of the memory - so I don't have to reload the programs.

Note that a common bug when programming is a runaway stack, wherein the INC STACK and DEC STACK's aren't balanced.
This causes a stack crawl and may overwrite the program. Time to reload. I have an EEPROM to save stuff in.
The Elf and IDIOT's !M load routine is slow enough that the chip's byte write mode works just like RAM. I then
disable write with a jumper and block-move the code into RAM.
That's a great idea - will work that into the sprawl.

>> And it should not be hard to patch UT4 to use the UART.
 Since you have the UART, that's probably the easiest. But keep the Q/EF wiring in place as most 1802 software uses
bit-banged serial I/O.
Good point.  There's no reason for me to remove any of that.

 As for Monitors, most folks appear to be using UT4, IDIOT, Chuck Yakym's MCSMP, or ELF2K. The first 3 have similar
functions. ELF2K supports a disk operating system that can run on CF card "disks". MCSMP loads and saves I8hex files.
IDIOT and UT4 use RCA's !M format. MCSMP will run 9600 Baud at a CPU clock of 1.8 MHz. I use IDIOT, and an assembler that
creates RCA-compatible list files that IDIOT (or UT4) can load. IDIOT is run-time relocatable, and will find RAM to
save the registers. UT4 needs to be assembled with a RAM location coded in. UT4 supports JAM_8000 RUN_U, IDIOT does not.
 I am redesigning my Elf expander to locate IDIOT at F800, an EEPROM at 8000 or C000, and the rest RAM. RUN_U will use
the scheme in Ipso Facto.
Thanks for the monitor comparison.  I need to spend some quality time looking over the options.  I'll probably borrow from everything and cobble something together.

I really want to be able to load the Intel HEX dump format from the assembler directly, rather than having to use the UT4 !M interface for that.

 I like the labels on the underside of the chip sockets. How are these done??
I'm using the plastic Wrap-ID's where I could get them.  But they are expensive nowadays, so I won't be buying more.



The paper ones are just done up in Inkscape, and then stuck in place with a piece of two-sided tape - I have a roll of very narrow tape for this.

This is just a by-product of my layouts.  I design a full detail layout in Inkscape, and then also flip it over (reversing all of the text again) so I have both a TOP and a BOTTOM view.

For example, from the VIS build:





Printing an extra copy of the bottom view gives me the labels to cut out.


>>  My design approach has been to use as many of the CDP1800 series as I could source
>> - and I managed to get just about the complete set.

Where did you get the 1863 frequency divider??
Just did a search on eBay and it popped up from someone in China.  He seems to still have some available.  They are used though, but mine worked fine.

Please tell us more about the Logic Analyzer!
Just like you can get very cheap scan converters today, you can get very cheap logic analyzers.  They are built using a little MCU development board, but with logic analyzer FW added.

This one I found on amazon.co.uk (I'm in the UK) from Hobby Components Ltd for about £16 (~ $20). 

https://www.amazon.co.uk/dp/B06Y2C3XVH/ref=pe_3187911_189395841_TE_3p_dp_1

The board is basically just an interface card - all of the processing happens on the laptop using the open source package sigrok.

https://sigrok.org/

Nothing fancy, but gets the job done, and cheaply.

Cheers,
Todd

taf123
 

I've decided to make some changes to the ELF-ish before working on additional expansion.  Firstly, I changed my mind about POR into the monitor vs. RUN_U/RUN_P.  I've decided that I prefer RUN_U/RUN_P but that there are occasions when I'd like to be able to POR to monitor in VIP style.  So, I decided to allow both, using jumpers to select the method.

So I rebuilt the RESET latch and the RUN_P de-bouncer, but added an additional jumper, JP7, which can disable the latch to allow the POR-into-monitor function.  The DS1233-10 is still in place, so in either mode we still have POR.



Back on the main chassis, I had to bring the RUN_U signal back to the RUN_MODE jumper to allow the selection between RUN_U/RUN_P and VIP style jamming for the monitor at 0x8000.


taf123
 

The next change has to do with the CDP1879 Real Time Clock.  The discrete 32.768kHz oscillator had some problems: it's not accurate enough, the frequency would shift with the lower voltage during battery backup, and it eats the cell batteries I was using.

To "fix" all of this, I replaced the discrete oscillator with a DIL-8 can oscillator and removed the battery backup components.  In addition to fixing the accuracy of the clock, this also frees up much needed space on the board for the other changes I had in mind.

Of course it means I don't have battery backup for the RTC any more, but I may go back and fix that at a later date.

taf123
 

I like the idea of generating the baud clock for the CDP1854 UART from the CPU clock - it seems to go along with the COSMAC philosophy of component reduction.  But there are times when I'll want to change the CPU clock and still use the UART, so I had to decouple the two.

Using the space freed up by removing the RTC battery backup, I installed a DIL-8 2.4576 MHz can oscillator.

taf123
 

Next up was to fix the hack I had to use to get the AMD 9511 APU working. As a reminder, the AMD 9511 requires the !MRD signal to be delayed until after the low byte of the address has stabilized on the multiplexed address bus. I was limited to existing spare gates on the board, so I used the combined propagation delay of several gates. This worked, but was a terrible hack.

Using some of the space freed up by removing the RTC discrete oscillator, I was able to rebuild this.



I installed a 74HC74 (U42) dual D flip-flop and a 4011 (U34) quad two-input NAND. Using this, I created the delay synchronously to the clock and system timing pulse TPA.

I invert TPA, so the trailing edge clocks FF A and sets its Q. The output of this is connected to the D-input of FF B. On the next rising edge of the system clock, this sets the Q and clears the !Q of FF B.

The !Q of FF B does two things. First, it clears FF A, resetting it until the next machine cycle. It also sets the RS FF made up of two of the NAND gates, the output of which I've labeled WINDOW.

WINDOW is then gated with MRD (inverted !MRD), to provide a delayed !MRD (!DMRD) for the APU.

At the start of the next machine cycle, the inverted TPA signal resets the RS FF and the cycle repeats.

Phew. But does it work?




Since the APU is memory-mapped, I can operate it directly from UT4. This is the same test I performed before, where I add the two 16-bit numbers 0x1234 and 0x5678.

The first four lines push the four bytes onto the APU internal stack. The fifth line issues the 16-bit add (0x6C).

The next lines pop the resulting two bytes off of the APU internal stack. The result is 0x68AC, which is correct.

The final two lines read the APU status register. Value 0x00 means that the APU is currently idle and that the previous result did not cause a carry/borrow.

So it works. Yeah!


I wanted to look at the signals in detail, but that £18 logic analyzer I have been using, operating at only 8 MHz, just doesn't give good enough results (or at least I wasn't happy with them any more).  So I got an upgrade - a Kingst LA5016 USB 16-channel logic analyzer with a 500MHz sampling rate.  This cost £132 (about $160) on eBay from China.



This comes with its own software, which is quite a step up from the sigrok I was using with the other system.






The screen shows the read of the APU status register at 0xCC01. The eleven channels were connected as follows:

00 - CPU Clock
01 - TPA
02 - TPB
03 - !TPA
04 - D1TPA
05 - !D2TPA
06 - WINDOW
07 - !MRD
08 - !DMRD
09 - APU !CS
10 - MA0

The green vertical lines are measurement pairs, with the results in the box on the right.

The difference between B1 & B2, the left pair, is the fetch machine cycle of eight CPU clock cycles.  With my manual placement, the results showed 3.254us, which is close enough to the 3.255us expected from a 2.4576 MHz CPU clock.

The middle pair, A1 & A2, shows the amount that !MRD has been delayed (!DMRD) compared to the transition of MA0 to the HIGH state, which is 166ns.

According to the AM9511 data sheet, TCDR, the C/!D (connected to MA0) to !RD LOW set-up time, must be at least 0ns (!RD must happen after C/!D is stable, not before).

Looks good.  I didn't measure the actual delay achieved between !MRD and !DMRD, but it looks like about 1.5 CPU clock cycles.

taf123
 

The final update has to do with the two-level I/O system.  Sub-routines and Interrupt Service Routines shouldn't leave any unwanted side-effects.  For I/O processing, this means they should set the I/O level back to whatever it was before they selected a new I/O level to do their own work.

With the two-level system as built, it's write-only.  To achieve the required transparency, all I/O processing would need to save the I/O-level into a well-known memory location so sub-routines and ISRs could reset it when they were finished.

I don't like this option.

Using the last of the freed up space from the RTC rebuild, I installed a 74HC244 3-state octal buffer which simply reads the current state of the I/O level, as set in the CDP1875 output port I'm using as the I/O-level latch, and sends it to the buffered I/O bus.  This is connected to the previously unused INP 1 (69) command.  Now sub-routines and ISRs can use INP 1 to read the current I/O level, store it in a temp location, do their thing, and then restore the I/O level by outputting the stored value.
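
So the idiom for a transparent sub-routine or ISR becomes something like this sketch:

         SEX  R2
         INP  1            ; read the current I/O level (into D and M(R2))
         PLO  RE           ; stash it in a spare register
         LDI  01H          ; select whatever level we need, say IO0
         STR  R2
         OUT  1
         DEC  R2
         ;
         ; ... do the routine's own I/O here ...
         ;
         GLO  RE           ; restore the caller's I/O level
         STR  R2
         OUT  1
         DEC  R2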





You'll note that I'm not using the full complexity of the RCA two-level system.  Instead, I just use each bit as an individual selector, giving eight I/O levels, labelled IO0 - IO7.

For a quick test of this, I set bit 2 of the I/O level (IO2) by outputting 0x04. This I read back in using the new support for INP 1. I then changed the I/O level so bit 0 was set (IO0), enabling the HEX displays, and output the previously read 0x04 to the display.



This picture also captures all of the recent changes.

taf123
 

For those interested, I've uploaded the current versions of the Main and CDP-IO schematic files to https://groups.io/g/cosmacelf/files/Todd%27s%20ELF-ish%20Schematics