15 comments

Great job.

The additional explanations are great.

You know I do break this rule in my plugin:

* !!!WARNING!!! PSI element types should be unambiguously determined by AST node element types.
* You can not produce different PSI elements from AST nodes of the same types (e.g. based on AST node content).
* Typically, your code should be as simple as that:
0

Hahahah! I'll try to stick to the rules myself ;) Now I will try to implement some of what I wrote. Then I will explore the naming API. Hoping Dmitry will give me a good starting point.
Ter

0

I don't think I've implemented naming or references quite correctly, even to this day.

I didn't use the qualified names implementation. I came up with my own.

Basically I built my PSI trees with separate reference elements which enclosed any variable usage, but they just pointed to the enclosed element. So it was basically a decorator. To find the actual element referred to by the reference, I called resolve() in the annotator implementation, which would resolve to another element (which was usually also enclosed in a reference element itself).
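
A rough sketch of that decorator idea, with made-up names (MyRefExpr and findDeclaration are placeholders, not real APIs; only ASTWrapperPsiElement and PsiReferenceBase come from the platform):

import com.intellij.extapi.psi.ASTWrapperPsiElement;
import com.intellij.lang.ASTNode;
import com.intellij.openapi.util.TextRange;
import com.intellij.psi.PsiElement;
import com.intellij.psi.PsiReference;
import com.intellij.psi.PsiReferenceBase;
import com.intellij.util.ArrayUtil;
import org.jetbrains.annotations.NotNull;

// A reference element that wraps a variable usage and resolves on demand.
public class MyRefExpr extends ASTWrapperPsiElement {
    public MyRefExpr(@NotNull ASTNode node) { super(node); }

    @Override
    public PsiReference getReference() {
        TextRange range = new TextRange(0, getTextLength());
        return new PsiReferenceBase<MyRefExpr>(this, range) {
            @Override
            public PsiElement resolve() {
                // An annotator (or anything else) can call resolve() and
                // highlight the usage if this returns null.
                return findDeclaration(MyRefExpr.this, getValue());
            }

            @Override
            public Object[] getVariants() { return ArrayUtil.EMPTY_OBJECT_ARRAY; }
        };
    }

    // Placeholder for whatever scope walk or symbol table the plugin uses.
    private static PsiElement findDeclaration(PsiElement from, String name) {
        return null;
    }
}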

I know I would do a few things differently today, but these undertakings are huge projects. Maybe I will get lucky and find a company willing to let me work on the Lua IDE as part of my job someday. There are so many things I would love to improve if I had the time.

One question I never asked is what LightElements are and why you would use them. I understand lazy parsing to the extent that you can layer your PSI tree so that information that belongs together can be processed on demand. The easy example is JavaDoc comments. But it also seems like it can be used in a way similar to stubs, so you don't build parts of the tree that you aren't going to use right away.

That reminds me that I need to use stubs more. I use them mostly for names and for storing calculated type information. They are really handy for resolving names over large scopes.
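
Roughly what that looks like, for anyone who hasn't seen a stub index before (MyNamedElement and the "mylang.names" id are placeholders, and the stub element type still has to feed this index from its indexStub()):

import com.intellij.openapi.project.Project;
import com.intellij.psi.search.GlobalSearchScope;
import com.intellij.psi.stubs.StringStubIndexExtension;
import com.intellij.psi.stubs.StubIndex;
import com.intellij.psi.stubs.StubIndexKey;
import java.util.Collection;

// Index from name -> named elements, answered from stubs without
// building full PSI trees for every file in the project.
public class MyNameIndex extends StringStubIndexExtension<MyNamedElement> {
    public static final StubIndexKey<String, MyNamedElement> KEY =
            StubIndexKey.createIndexKey("mylang.names");

    @Override
    public StubIndexKey<String, MyNamedElement> getKey() { return KEY; }

    // All declarations with this name anywhere in the project scope.
    public static Collection<MyNamedElement> find(String name, Project project) {
        return StubIndex.getElements(KEY, name, project,
                GlobalSearchScope.allScope(project), MyNamedElement.class);
    }
}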

0

You have spent a lot of time reading this code, right? I just assumed...

https://github.com/JetBrains/Grammar-Kit

0

I've used it but have not looked at the source code. That generalizes a process I do not understand yet. Once I know how to build a language manually, I can look to see how they automated it, in order to build my own ANTLR-based one. I think that is mostly the parsing part, which I got working fairly quickly thanks to the documentation on the key interfaces and lots of experience building parsers. I've also looked at the source code for the Lua, Properties, JavaCC, and Clojure plugins, and a few others, trying to abstract from all of these what is required to build a language plug-in.

BTW, I was totally wrong about ASTFactory, I just found out, so I updated my page. The ASTFactory seems to create parse tree internal nodes and leaves, which can be PSI or plain parse tree nodes. For example, the CoreASTFactory creates LeafPsiElement nodes by default.

0

Yep, I think I am going to create special PSI nodes according to whether it's a variable or function reference, but that might not be the approved mechanism. I will know more later. I keep running into mistakes I've made in what I've written down. I am too old to keep all of this in my head ;) I think someone from JetBrains will help me get the right approach.

Hahahah.  This morning, I did not even know what a stub was. I found a random presentation on the web from 2008. By the way, searching for ASTFactory gets zero hits on this development wiki. I guess Google will send people to my page :)

0

The ASTFactory just lets you build your PSI tree. There are only a few AST node types. The extension lets you add some code, if you need to, to help build the right PSI nodes from the AST elements.

Let's say whitespace has a special meaning in some location that is only known once the document is parsed. Perhaps that would be a use case.
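
Something like this, roughly (SIGNIFICANT_WS, MyTokenTypes and MySignificantWhitespace are invented names; MySignificantWhitespace would be your own LeafPsiElement subclass, and the factory gets registered for your language under the lang.ast.factory extension point, if I remember the EP name right):

import com.intellij.lang.ASTFactory;
import com.intellij.psi.impl.source.tree.LeafElement;
import com.intellij.psi.tree.IElementType;
import org.jetbrains.annotations.NotNull;

public class MyAstFactory extends ASTFactory {
    @Override
    public LeafElement createLeaf(@NotNull IElementType type, @NotNull CharSequence text) {
        if (type == MyTokenTypes.SIGNIFICANT_WS) {
            // Whitespace that actually means something gets its own leaf class.
            return new MySignificantWhitespace(type, text);
        }
        // Returning null falls back to the default factory (a plain LeafPsiElement).
        return null;
    }
}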

0

Please allow me to add some clarification on this one.

Grammar-Kit, beyond generating stuff and using it itself, is in reality a very compact example of a fully-featured language support plugin.
The IntelliJ API is used to provide references, structure view, some refactorings, code inspections and intention actions. No stubs!

The BNF language is just a bit more complex than the one you are trying to build manually.
All entry points are listed in META-INF/plugin.xml so one can navigate to the corresponding implementation in one click.
The generated parser is also very compact (as long as you do not go into GeneratedParserUtilBase) and mostly just reflects the original BNF,
so one can observe the whole lexer->parser->AST nodes->PSI elements chain.
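
To make that chain concrete, here is roughly the glue class every language plugin has (all the My* names are placeholders; in a Grammar-Kit project the createElement() body is typically the generated MyTypes.Factory.createElement):

import com.intellij.lang.ASTNode;
import com.intellij.lang.ParserDefinition;
import com.intellij.lang.PsiParser;
import com.intellij.lexer.Lexer;
import com.intellij.openapi.project.Project;
import com.intellij.psi.FileViewProvider;
import com.intellij.psi.PsiElement;
import com.intellij.psi.PsiFile;
import com.intellij.psi.TokenType;
import com.intellij.psi.tree.IFileElementType;
import com.intellij.psi.tree.TokenSet;

public class MyParserDefinition implements ParserDefinition {
    public static final IFileElementType FILE = new IFileElementType(MyLanguage.INSTANCE);

    @Override public Lexer createLexer(Project project) { return new MyLexer(); }        // text -> tokens
    @Override public PsiParser createParser(Project project) { return new MyParser(); }  // tokens -> AST
    @Override public IFileElementType getFileNodeType() { return FILE; }

    @Override public TokenSet getWhitespaceTokens() { return TokenSet.create(TokenType.WHITE_SPACE); }
    @Override public TokenSet getCommentTokens() { return TokenSet.create(MyTypes.COMMENT); }
    @Override public TokenSet getStringLiteralElements() { return TokenSet.create(MyTypes.STRING); }

    // AST node -> PSI element, keyed purely by the node's element type.
    @Override public PsiElement createElement(ASTNode node) { return MyTypes.Factory.createElement(node); }

    @Override public PsiFile createFile(FileViewProvider viewProvider) { return new MyFile(viewProvider); }
}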


On IntelliJ API and plugin development:

The platform codebase is rather large and many tasks can be achieved in several similar, but not identical, ways. One can grok the differences, the pros and cons, only
by reading community project sources on GitHub. It would take a lot of words to describe the meaning and side effects of a method just 10 lines long.

I highly recommend cloning https://github.com/JetBrains/intellij-community and setting it as the sources of your plugin's IntelliJ SDK in Project Structure/SDKs.
Then you will be able to see what really happens under the hood, and many other usages of a given API will be instantly available for reading.
As a plugin developer myself, I find it impossible to proceed (or even start) without this step.

All our (new and old) team members employ this read-code-and-do-as-they-did approach, and many questions dissipate,
like, for example, lexer thread safety or why lexer state cannot be used in the parser.
Problems arise mostly when one doesn't know where to start looking for something, or when an API is split into separate collaborating parts.
Documentation is just a starting point and not really a contract definition, though it tries to mention the basic concepts.
We keep the openapi more or less stable, but the current state of things may differ, so the code is the primary source of knowledge.
Only automated refactorings, continuous integration, version control history and YouTrack make the whole project manageable.

Hope this will help

0

It is true that the OpenAPI is more or less stable, but even then, just keeping up with its evolution over time can be a bit of work.

On the other hand - it is so much fun to be a part of, and each day you work on it you feel like you are learning new and fascinating things. It becomes quite addictive.

I cannot tell you how much time I have spent reading the intellij-community code and the Git commit logs, and of course - like any decent open source developer - stealing every bit of code from every project I could find.

There is so much to learn when writing a language plugin; you have to use every resource out there, look at 5-6 language plugins, and see what they all do and what only some do.

I literally went from having 0 lines of code, and never having even looked at a plugin, to something that worked in about 3-4 months, and I did most if not all of the language support extensions to one degree or another during that time. So if I could do it, so could someone else. Truth be told, though, there are only a few that have been written, and (this was before Grammar-Kit) no real basic canonical example.

The best one, I thought, was the JS plugin, and I had to search to find the originally posted sources since they weren't available.

My best friend was PsiViewer, which I am now helping to maintain.

And, I'll be honest, JD - mostly to understand how APIs were used, get ideas, and figure out what extension points would be useful to write.

Overall it has been fun and painless. I have had to bug Dmitry and Max several times, including at least one "make or break" time, but they are always so helpful.

Sometimes the best documentation is working code, and the occasional question - but as someone who is learning, you hate to bug people with lots of questions that you could answer on your own. I understand that part of it - and I try to answer questions myself here sometimes when I see a new language plugin writer. I see some of the others doing it from time to time too.

But to circle back - I do wish I had had Grammar-Kit before I started. It was written basically at the same time my language plugin was, and I didn't want to start over.

My biggest regret was not writing unit tests - but on the other hand, my plugin went through so many major changes that they would have been a heavy burden to maintain. Now they would be great, and I am trying to write them as I find bugs. But for someone doing this for free in their spare time, I really enjoyed writing new features more - so that is what I did.

0

Hi Jon, cool. Yeah, I decided to look into the annotator stuff at the same time as undoing the references. I think I see how references work now, and using a symbol table that maps a name to its definition and PSI node is in order. That might be better done in the annotator, but I have not looked at that at all yet. I will leave stubs for when I have to do multi-file stuff.
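
For what it's worth, the simplest version of that symbol-table idea I can picture looks like this (MyVarDef and its getName() are placeholders for whatever the grammar calls a definition):

import com.intellij.psi.PsiElement;
import java.util.HashMap;
import java.util.Map;

public class SymbolTableResolver {
    // Resolve a name by walking outward through enclosing scopes.
    public static PsiElement resolve(PsiElement usage, String name) {
        for (PsiElement scope = usage.getParent(); scope != null; scope = scope.getParent()) {
            PsiElement def = collectDefinitions(scope).get(name);
            if (def != null) return def;
        }
        return null; // unresolved; an annotator could highlight this usage
    }

    // Map of name -> definition for the immediate children of one scope.
    private static Map<String, PsiElement> collectDefinitions(PsiElement scope) {
        Map<String, PsiElement> result = new HashMap<>();
        for (PsiElement child : scope.getChildren()) {
            if (child instanceof MyVarDef) {
                result.put(((MyVarDef) child).getName(), child);
            }
        }
        return result;
    }
}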

0

Thanks, Greg. I will now look into the source code of that plug-in as well.

Regarding the comments in the code: the most useful thing in a comment is not how something works, because the code says that. I think what people need is a simple one-liner that says what the class is for and, if you are feeling generous, how it relates to the overall process or other classes. It would take you guys three seconds and would save every single developer who tries to build a plug-in three hours. This is what I'm trying to do in that document I pointed you at. I encourage you to borrow the comments that happen to be correct from my notes for the source code. I need all of the documentation in one place as a cheat sheet at the moment, so I have not created a fork on GitHub.

IntelliJ is by far the best development environment in my opinion, which is why I'm pushing through despite my frustration. Fortunately, your code base is extremely good, and building a plug-in has turned out to be vastly simpler than in NetBeans or Eclipse.

0

Jon, the ASTFactory appears to only let you create PSI nodes for the leaves, not the internal nodes; for internal nodes it only lets you create parse tree nodes, not PSI nodes. To create PSI nodes for those internal parse tree nodes, you have to use that createElement() in the parser definition, right?

0

Hiya. Just for a forwarding reference, I have moved the doc to https://github.com/antlr/jetbrains, which is a library to support the use of ANTLR grammars in JetBrains IDEs for building custom languages.

DOC https://raw.githubusercontent.com/antlr/jetbrains/master/doc/plugin-dev-notes.md

1

Sounds interesting, Terence, but I don't see the library code? Or is it in a different repository?

0

hi Jeremy, I just built a quick plug-in that uses purely ANTLR parse trees.

https://github.com/antlr/jetbrains-plugin-st4

I will be updating that repository that has the development notes as I come up with a generic library. Feel free to add to the development notes via pull request.

Terence

0
