Source code of non-bootstrapped version of compiler?
pr.dimiii@...
Hi, devs! I think it would be useful for the community to have an IDE for Stanza. I've been trying to implement an Eclipse-based IDE using Xtext technology, but I have stalled after implementing the top-level forms.

Hi! Many thanks for trying to provide Eclipse support for Stanza! It's an often-requested feature, but none of us on the core team are familiar with it. The original bootstrap compiler was written in Scheme, and the language has deviated a lot since then, so I don't think you'll find it helpful. The most important hurdle is that Stanza's grammar is handled in two distinct phases. The first phase (in core/reader.stanza) converts a list of characters into an s-expression. This uses a custom hand-coded lexer, and you'll have to dig through the source if you want 100% compatibility. The actual parsing of the s-expression is handled much more systematically: the definitions in compiler/stz-core-macros.stanza define a pretty readable grammar. Cheers. -Patrick

Hi. Here's a little program that will read in a file and print out the lexed s-expressions within it. It will also expand them using Stanza's core macros, so you can see the actual AST that the Stanza compiler compiles. You can run it using: ./harness somefile.txt. The macro system requires just a little bit of cleanup, and then we'll release and document it fully. Hope this helps! -Patrick
defpackage mypackage : [...]
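A minimal sketch of what such a harness might have looked like, assuming read-file from the reader package and the #exps! production registered by stz/core-macros (both names are written from memory, not taken from the original listing):

defpackage mypackage :
  import core
  import reader
  import stz/core-macros

defn main () :
  ; Phase 1: the reader lexes the raw characters of the file into s-expressions.
  val filename = command-line-arguments()[1]
  val forms = read-file(filename)
  println("Lexed s-expressions:")
  println(forms)
  ; Phase 2: expand the s-expressions with the core macros to see the AST
  ; that the compiler actually consumes. The #exps! production name is an
  ; assumption and may be spelled differently in your Stanza version.
  val expanded = parse-syntax[core / #exps!](forms)
  println("Expanded forms:")
  println(expanded)

main()

Building this requires the compiler source packages (reader and stz/core-macros) to be visible to the build; the exact compile invocation depends on the Stanza version. Once compiled to ./harness, it is run as described above.
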
Seems like this API may have changed. I get: [...]
I was able to use the parse-syntax function like so: [...]
Is there a way to get all symbols instead of just those in the current file?

Sure! Do you have a list of filenames that you want to analyze? You can also use a small function to collect all the [...]
I mean, how do I add the builtins and imported terms to the results?

Oh, I see. For the builtins, e.g. the standard macros and language constructs, you might need to just hardcode a list (a rough sketch of such a list is shown below). For the imported terms, a strategy that might work is:
- Scan through the [...]
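As for the hardcoded builtins list, a rough illustration of the idea, assuming a plain tuple of quoted symbols; the entries below are a hand-picked sample, not a complete or authoritative set of Stanza's core forms:

; Illustrative sample only: core forms and standard constructs that a tool
; might treat as builtins when reporting symbols.
val builtin-forms = [`defpackage, `import, `defn, `defn*, `defmulti, `defmethod,
                     `deftype, `defstruct, `val, `var, `if, `else, `match,
                     `switch, `for, `while, `let, `fn, `fn*, `label, `within]
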
What about the definitions database? Thank you for your help!

Ah, that's right! The thought completely slipped my mind. That's definitely the easiest way. You can deserialize the file using this command: [...]
The actual structs for the definitions database are defined in the package [...]
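For context, the database file itself is produced by the compiler. In recent Stanza versions the invocation looks roughly like the line below; the output filename is a placeholder, and the exact flags may differ between versions:

stanza definitions-database stanza.proj -o defs.dat
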
So, I am making some headway towards a makeshift language server. The actual server is a node/shell script (server.mjs) using google/zx. It makes calls to the Stanza compiler to rebuild the defs database, and also to a Stanza script (deserialize.stanza) that deserializes it for the language server to then use in providing the details. I also convert the package paths for [...]

The main extension will start the zx script and request details through a console. The zx server script watches for file changes to rebuild the defs database, reads the console commands, and performs searches against the serialized definitions data to gather the required info. Using this workflow, I should be able to support these LSP actions: [...]

I will need to make some decisions on how best to handle settings (and error messages) for the [...] Let me know what you think!