Graphical integration of running tests in plugin
Hello,
I am able to run tests in the language for which I'm making a plugin. Currently I extend the CommandLineState class and implement the startProcess function, which launches the process that actually runs the tests on the command line.
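For reference, this is roughly what I have now (a simplified sketch; the command line and class names are placeholders for my real ones):

import com.intellij.execution.ExecutionException;
import com.intellij.execution.configurations.CommandLineState;
import com.intellij.execution.configurations.GeneralCommandLine;
import com.intellij.execution.process.OSProcessHandler;
import com.intellij.execution.process.ProcessHandler;
import com.intellij.execution.runners.ExecutionEnvironment;
import org.jetbrains.annotations.NotNull;

// Sketch of my current run state: it only launches the external test tool,
// so its output ends up in a plain console without any test UI.
public class MyTestRunState extends CommandLineState {

    protected MyTestRunState(ExecutionEnvironment environment) {
        super(environment);
    }

    @NotNull
    @Override
    protected ProcessHandler startProcess() throws ExecutionException {
        // "mytest" stands in for the real command line of my test runner
        GeneralCommandLine commandLine = new GeneralCommandLine("mytest", "--run-all");
        return new OSProcessHandler(commandLine);
    }
}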
However, I would also like to integrate with the visual UI part of the test framework: I would like to show a green or red status bar depending on whether any test reports a failure, show the names of the failed tests (if any) at the end, and so on. I have been trying to look around in some examples, notably the JUnit plugin, but I can't find my way around in it well enough to see where the UI integration comes in. Can anybody give me a pointer on where I'd best start looking?
Kasper
Hi Kasper,
JUnit uses a custom view for historical reasons. The latest state of the art here is SMRunner: it allows you to integrate a listener on the test framework's side with IDEA's view. com.intellij.execution.testframework.sm.SMTestRunnerConnectionUtil#createAndAttachConsole(java.lang.String, com.intellij.execution.process.ProcessHandler, com.intellij.execution.testframework.TestConsoleProperties, com.intellij.execution.runners.ExecutionEnvironment) could be the starting point. There are usages e.g. in Gradle, so you can see how it is supposed to work.
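A rough sketch of the wiring (not a complete implementation; the framework name, the configuration field, and the SMTRunnerConsoleProperties(RunConfiguration, String, Executor) constructor are assumptions you should adapt to your plugin):

import com.intellij.execution.DefaultExecutionResult;
import com.intellij.execution.ExecutionException;
import com.intellij.execution.ExecutionResult;
import com.intellij.execution.Executor;
import com.intellij.execution.configurations.CommandLineState;
import com.intellij.execution.configurations.RunConfiguration;
import com.intellij.execution.process.ProcessHandler;
import com.intellij.execution.runners.ExecutionEnvironment;
import com.intellij.execution.runners.ProgramRunner;
import com.intellij.execution.testframework.sm.SMTestRunnerConnectionUtil;
import com.intellij.execution.testframework.sm.runner.SMTRunnerConsoleProperties;
import com.intellij.execution.ui.ConsoleView;
import org.jetbrains.annotations.NotNull;

// The same CommandLineState that starts the test process can attach the
// SM runner console to it instead of the default console.
public abstract class MyTestRunState extends CommandLineState {
    private final RunConfiguration myConfiguration;

    protected MyTestRunState(ExecutionEnvironment environment, RunConfiguration configuration) {
        super(environment);
        myConfiguration = configuration;
    }

    @NotNull
    @Override
    public ExecutionResult execute(@NotNull Executor executor, @NotNull ProgramRunner runner)
            throws ExecutionException {
        ProcessHandler processHandler = startProcess();

        // Console properties tie the test console to the run configuration.
        SMTRunnerConsoleProperties properties =
            new SMTRunnerConsoleProperties(myConfiguration, "MyFramework", executor);

        // Creates the test runner UI and attaches it to the process output.
        ConsoleView console = SMTestRunnerConnectionUtil.createAndAttachConsole(
            "MyFramework", processHandler, properties, getEnvironment());

        return new DefaultExecutionResult(console, processHandler);
    }
}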
Anna
Hi Anna,
Thanks for the pointer. I'll start by finding the Gradle code and then play around with it in my own plugin; I'll post again when I have more questions!
Kasper
Did you ever get this working? I'm trying to do the same thing and I have it to a point where I've called SMTestRunnerConnectionUtil.createAndAttachConsole() to create the graphical view, but I can't for the life of me figure out what to do next.
My language's unit tests are executed via API calls where you queue up a test run with one call and then poll for status until the test run is complete. Each poll can provide information about which tests have already run, whether those tests passed or failed, how long each took, and of course on failure, the failure message and stack trace information. Standard stuff...
Now I'm not sure how to pass this information to the created console. I've scoured the Python and Gradle examples as well as the Erlang plugin source and must be missing the secret sauce! I thought perhaps I needed to report status as ServiceMessages in the associated ProcessHandler, so I've tried things like the following in my process handler just to simulate events:
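(Something along these lines, with hard-coded names just to see whether anything shows up in the tree:)

import com.intellij.execution.process.ProcessOutputTypes;
import com.intellij.execution.testframework.sm.ServiceMessageBuilder;

// Inside my ProcessHandler subclass: fake a suite with one passing test.
private void simulateTestRun() {
    notifyTextAvailable(ServiceMessageBuilder.testSuiteStarted("MySuite").toString() + "\n",
                        ProcessOutputTypes.STDOUT);
    notifyTextAvailable(ServiceMessageBuilder.testStarted("myTest").toString() + "\n",
                        ProcessOutputTypes.STDOUT);
    notifyTextAvailable(ServiceMessageBuilder.testFinished("myTest")
                            .addAttribute("duration", "42").toString() + "\n",
                        ProcessOutputTypes.STDOUT);
    notifyTextAvailable(ServiceMessageBuilder.testSuiteFinished("MySuite").toString() + "\n",
                        ProcessOutputTypes.STDOUT);
}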
but nothing seems to be happening. I can't find any real documentation on this anywhere.
Any pointers you guys might be able to provide on how to connect one side to the other are GREATLY appreciated!
Hi Scott,
I've never got as far as trying to implement this, but I investigated it a while back. My impression from looking at the code was that the UI is driven by parsing the output of the test runner process, and that it expects messages in the TeamCity format, which can be built using ServiceMessageBuilder. I'm not sure whether they can be passed directly to notifyTextAvailable() or not, though.
If I get a moment in the next couple of days I'll try to implement this, since it's on my shortlist anyway.
Cheers,
Colin
Actually, another thing I would like to know - I see the test runner UI has changed significantly in IntelliJ 15. Is this still driven by SMTestRunner and friends, or is there a new API?
I think I've figured it out. More work to do, and I'll share my findings, but I'm making solid progress now.
Oh great. Thanks for sharing all your findings back here in the forum, too - it's very helpful.
Hi guys,
The UI was changed, but the API is more or less unchanged. Please note the deprecations. As a sample you may look at com.intellij.execution.JavaTestFrameworkRunnableState, where the console is created. The new import of tests from a file/history works through the same API, and you may be interested in com.intellij.execution.testframework.sm.runner.history.ImportedTestContentHandler, where test messages are passed based on XML.
Feel free to ask questions as I am keen to add corresponding javadoc.
Thanks,
Anna
Thanks, Anna. Additional documentation in any form would be very useful. I'm pretty much stumbling through this, trying to see what works and what doesn't while looking at the source for both the Open API and other plugins that run tests, but it's definitely not the most straightforward part of the system. Ultimately, I think the most useful "documentation" that I've found so far is this for Android:
https://android.googlesource.com/platform/tools/idea/+/a6eac331b3d9f0d4168b12356ea256c83f4e9c05/plugins/android/src/org/jetbrains/android/run/testing/AndroidTestListener.java
combined with the constants defined in OutputToGeneralTestEventsConverter.MyMessageServiceVisitor and their usages.
As I said, I think I'm finally close to having something that works, but I fear I'm going to miss something important in the implementation.
Thanks again!
Thanks, Anna. I agree with Scott that the most important first step would be just to describe how this API works at a high level. Is my description above more or less correct?
I think that you described the idea correctly. Just in case, I'll repeat it again: tests are run, and the test framework reports test results via messages in the TeamCity format (testStarted/testFinished messages are printed to the output). IDEA reads the output and builds the test tree.
Anna
Yeah, I'm doing pretty much what I posted above, using notifyTextAvailable() on STDOUT and ServiceMessageBuilder to create the messages that I post. Right now I'm posting the following messages (all with newlines at the end):
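(Names and durations here are just examples, but this is the shape of what goes out, one message per line over STDOUT:)

##teamcity[testSuiteStarted name='MyTestClass']
##teamcity[testStarted name='MyTestClass.testMethod1']
##teamcity[testFinished name='MyTestClass.testMethod1' duration='127']
##teamcity[testStarted name='MyTestClass.testMethod2']
##teamcity[testFinished name='MyTestClass.testMethod2' duration='54']
##teamcity[testSuiteFinished name='MyTestClass']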
Obviously as I continue to flesh this out, I'll be adding support for testFailed(), testIgnored(), etc., messages. I'm still trying to figure out which attributes have what effect, and I'm also not entirely sure how to use "locationHint" to provide navigation to the test class source. I imagine I'll hit the same thing when I start to write stack traces into the console for failed tests, but I'm assuming I might need to implement a separate extension point for that to work properly in any context.
So yeah, any additional details on this stuff are HUGELY appreciated!
You are doing everything correctly. The locationHint is remembered in an SMTestProxy field. You need to provide a com.intellij.execution.testframework.sm.runner.SMTestLocator in the console properties, which are passed to com.intellij.execution.testframework.sm.SMTestRunnerConnectionUtil#createConsole(java.lang.String, com.intellij.execution.testframework.TestConsoleProperties). The RerunFailedTests action is also created in the properties, as are additional filters, etc.
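A sketch, assuming you override getTestLocator() in an SMTRunnerConsoleProperties subclass (the properties class, the framework name, and the "mytest" protocol are placeholders):

import com.intellij.execution.Executor;
import com.intellij.execution.Location;
import com.intellij.execution.configurations.RunConfiguration;
import com.intellij.execution.testframework.sm.runner.SMTRunnerConsoleProperties;
import com.intellij.execution.testframework.sm.runner.SMTestLocator;
import com.intellij.openapi.project.Project;
import com.intellij.psi.search.GlobalSearchScope;
import org.jetbrains.annotations.NotNull;

import java.util.Collections;
import java.util.List;

// Console properties that supply a locator; the locator resolves locationHint
// URLs (e.g. "mytest://MyTestClass.testMethod1") to source locations.
public class MyConsoleProperties extends SMTRunnerConsoleProperties {
    public MyConsoleProperties(RunConfiguration config, Executor executor) {
        super(config, "MyFramework", executor);
    }

    @Override
    public SMTestLocator getTestLocator() {
        return new SMTestLocator() {
            @NotNull
            @Override
            public List<Location> getLocation(@NotNull String protocol, @NotNull String path,
                                              @NotNull Project project, @NotNull GlobalSearchScope scope) {
                // Look up the class/method named by 'path' in your language's PSI
                // and return Location objects for it; empty list when not found.
                return Collections.emptyList();
            }
        };
    }
}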
Thanks! I'll check that out this evening when I actually wire this into the real unit testing API. I'm sure I'll be back with more questions!
Great, thanks Anna. One thing I wanted to ask: I'm interested in making a test runner driven by a file watcher, so that when files are touched, some or all of the tests are re-run. This would all be from the same process, so I would output more events for tests that have already been run. Is this possible with this UI? If events are received for tests that have run previously, will they be shown as running again?
Colin,
as far as I saw, when a test is finished, all subsequent events for it are ignored and debug information is dumped to the log. We have an 'auto-test' ability, but it restarts the whole tree. We will think in this direction, but I can't promise anything, sorry.
Anna
Colin, just a quick update...I now have unit tests for my custom language successfully integrated into IDEA! I definitely have plenty to do to get it production-ready, but as of about ten minutes ago I can click the play button and see each of my test classes represented as a suite and each test method under that suite. Then, when the results come in through the API, I post the correct message back to the observer and the UI is updated as appropriate. The special sauce is pretty much what I listed above with the ServiceMessageBuilder calls. It's definitely an indirect way to do this, but once you play in this particular sandbox, it actually works quite well.
Let me know if my post above doesn't give you enough info and I'm happy to share more. And of course thanks so much for the guidance, Anna!
That's great, thanks very much, Scott. I'm wrapping up my current release right now and I'm going to try implementing this for the next one. I'll definitely let you know how I get on. Thanks again for posting all your findings here, it's really helpful.
Anna, when I get to this I'll take some notes on the things that are not obvious so they can be documented.
Colin, one other thing I found this morning that's useful. Depending on your unit test framework, you may want to use SMTestRunnerConnectionUtil.createConsoleWithCustomLocator(), passing idBasedTreeConstruction=true and setting values for isSuite, parentNodeId, and nodeId as appropriate. If you don't do that, there's an implicit tree structure based on the order in which you send testSuiteStarted/Finished and testStarted/Failed/Ignored/Finished messages. This prevents showing the entire tree of suites/classes, test methods, etc., while the tests are running in the background. By using ID-based nodes, you can construct the entire tree and then send messages about the status of the various nodes as it becomes available. Makes for a MUCH better user experience!
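Roughly what my id-based messages look like (the exact semantics come from GeneralIdBasedToSMTRunnerEventsConvertor; the ids are just strings I generate myself):

##teamcity[testSuiteStarted nodeId='1' parentNodeId='0' name='MyTestClass']
##teamcity[testStarted nodeId='2' parentNodeId='1' name='testMethod1']
##teamcity[testFinished nodeId='2' duration='127']
##teamcity[testSuiteFinished nodeId='1']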
That's not quite correct. You may construct the tree before the tests are started; see the "tree construction events" in com.intellij.execution.testframework.sm.runner.GeneralTestEventsProcessor.
They are called when the "rootName", "suiteTreeStarted", "suiteTreeEnded", and "suiteTreeNode" messages are received (new for IDEA 15).
After that, test events (e.g. onTestStarted) search for the suite's child with that name and use it if found. So you don't need to support unique IDs when the framework doesn't provide them.
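For example, something like this before any tests run (attribute details may vary; locationHint is optional):

##teamcity[rootName name='MyFramework tests']
##teamcity[suiteTreeStarted name='MyTestClass' locationHint='mytest://MyTestClass']
##teamcity[suiteTreeNode name='testMethod1' locationHint='mytest://MyTestClass.testMethod1']
##teamcity[suiteTreeNode name='testMethod2' locationHint='mytest://MyTestClass.testMethod2']
##teamcity[suiteTreeEnded name='MyTestClass']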
Anna
Ah...thanks, Anna. At this point I've already implemented unique ID generation for my suites and tests as they're added to the tree and a map correlating them back to the assigned IDs, but it's certainly good to hear that there's another way to do this. At this point I'm just trying to figure out the various attributes that need to be added to control the behavior and presentation of each type of node. For example, I'm having a little trouble getting the test counts to line up properly across the board, and my failed tests aren't showing a duration in the statistics view. For the former I'm adding "count" with the number of test methods for each suite/test class to the "testSuiteStarted" message, and for the latter I'm adding "duration" with the test method duration in milliseconds for the "testFailed" message, but when tests complete I see:
Done: 10 of 21 Failed: 11 (81.108 s)
Actually...perhaps that one is right if "Done" means "Passed"? I guess I would expect:
Done: 21 of 21 Failed: 11 (81.108 s)
Perhaps I should look at a Java test run for reference...
As for the duration, I'm getting "<UNKNOWN>" for the duration of my failed tests and correct durations for my passed tests. I see SMTestProxy.setDuration() being called from GeneralIdBasedToSMTRunnerEventsConvertor.onTestFinished() but not from onTestFailure(). Is that expected?
Any thoughts on whether these are as-designed or perhaps I'm not sending all the attributes I should, or sending them with the wrong values?
Thanks again!
Scott,
count processing is ok; the protocol expects that you'll send testFinished even after testFailed.
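I.e. for a failing test the sequence looks roughly like this (message/details/duration are the usual service message attributes):

##teamcity[testStarted name='testMethod3']
##teamcity[testFailed name='testMethod3' message='Expected X but was Y' details='stack trace here']
##teamcity[testFinished name='testMethod3' duration='81']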
The statistics panel is removed in IDEA 15 and the wording has been reworked; it now looks like this: http://blog.jetbrains.com/idea/2015/07/intellij-idea-15-eap-introduces-new-ui-for-testing/
Thanks,
Anna
That helps yet again, Anna. Hopefully just one more quick question. I now have navigation to the test classes/methods from the tree via a custom locator, and I'm trying to add hyperlink navigation through my stack traces. I've created and registered an implementation of a stack trace filter with the ConsoleProperties and see it invoked when I go to source from the statistics view (still IDEA 14 here), but the stack traces that are output to the console aren't being highlighted with hyperlinks. Is there some attribute other than "details" that I need to set when sending a "testFailed" message that might help out here?
UPDATE: I figured it out. I had to register my stack trace filter as a console filter provider. I also registered it as an analyzeStacktraceFilter so now my language's stack traces are hyperlinking everywhere!
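In case it helps anyone else, the shape of it is roughly this (my class names and the plugin.xml lines in the comments are just how I set it up; adapt as needed):

import com.intellij.execution.filters.ConsoleFilterProvider;
import com.intellij.execution.filters.Filter;
import com.intellij.openapi.project.Project;
import org.jetbrains.annotations.NotNull;
import org.jetbrains.annotations.Nullable;

// Placeholder filter: scan each console line for my language's stack frame
// pattern and turn the matched range into a hyperlink to the source.
class MyStackTraceFilter implements Filter {
    private final Project myProject;

    MyStackTraceFilter(Project project) {
        myProject = project;
    }

    @Nullable
    @Override
    public Result applyFilter(String line, int entireLength) {
        // Parse 'line' for a class/method/line reference; if found, build a
        // HyperlinkInfo (e.g. OpenFileHyperlinkInfo) and return
        // new Result(startOffset, endOffset, hyperlinkInfo). Otherwise:
        return null;
    }
}

// Provides the filter to consoles; registered in plugin.xml as
//   <consoleFilterProvider implementation="com.example.MyConsoleFilterProvider"/>
// I also registered the filter under the analyzeStacktraceFilter extension point
// so the Analyze Stacktrace dialog picks it up.
public class MyConsoleFilterProvider implements ConsoleFilterProvider {
    @NotNull
    @Override
    public Filter[] getDefaultFilters(@NotNull Project project) {
        return new Filter[]{new MyStackTraceFilter(project)};
    }
}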
Anna, now that I'm sending a testFinished message after testFailed, my counts and durations look right, but I'm getting warnings like the following:
[ 77588] WARN - asedToSMTRunnerEventsConvertor - [Apex] Illegal state change [FAILED -> FINISHED]: {id=24, parentId=13, name='testFailure', isSuite=false, state=FAILED}
Are those benign given that everything else looks correct?
It looks like the 'id' protocol was not ready for the duration attribute. Sorry, I've only used the simple one myself; I will check what can be done.
Thanks
Please watch/vote https://youtrack.jetbrains.com/issue/IDEA-142739. Thanks
Thanks, Anna. For what it's worth, sending testFinished after testFailed actually does seem to yield the exact expected behavior in terms of the final test state. The only issue seems to be the logged warning about an invalid state transition. Do you see any harm in me keeping it this way? My plugin needs to support IDEA 12, 13, and 14, so I need a mechanism that will work that far back.
Scott,
I think that in the worst case, you'll get the same test multiple times. Too much logging could also make IDEA slow. I don't see any other problems here.
Though I would probably stay without IDs; this means that in older versions users won't get the tree, and with IDEA 15 you'll provide the tree in another way.
Anna
Okay, I pretty much have this buttoned up nicely and even working with IDEA 12, 13, and 14 (I haven't tried to tackle the 15 EAP yet). I have one additional question: if I send a testFailed message followed by a testStdOut message, they get written to the console in the opposite order. It's important to me that the testFailed message be written first because it contains the hyperlinked stack trace. The testStdOut message is really just runtime diagnostic info that might help explain why the test failed. As it is right now, the user has to scroll to the bottom of the console to get to the most important information. Any idea how I can better control the order of these messages aside from the order in which I post them? Thanks!
Hi Scott,
if you post an 'output' message outside of testStarted/testFinished, it is treated as unbound output and won't be visible when the user selects the test. It's possible to switch on 'scroll to stack trace' (TestConsoleProperties#SCROLL_TO_STACK_TRACE, which scrolls the output to the exception, so the std output won't be visible) by default.
Anna