Error Processing Large XML/SQL files. NEEDS TO BE FIXED BEFORE RELEASE
This worked about 4 or 5 releases ago. My company has very large SQL and XML files used to populate our databases (80-190 MB), and no matter how much RAM I allocate, IntelliJ now crashes if these files are even in the path of the project. I don't even have to open them.
This is causing a huge issue. I have to move the files out of the project, run the Ant scripts for the DB outside of the project, and then put the files back in. If I accidentally put them back while the IDE is running, I have to force it to quit because of fatal errors.
Please fix this ASAP.
Which build are you using?
We fixed a similar issue just before the IDEA 6 release.
--
Best regards,
Maxim Mossienko
IntelliJ Labs / JetBrains Inc.
http://www.intellij.com
"Develop with pleasure!"
I am using build 5755, the latest EAP release I know of, and the problem still exists. My project crashes when the files are added while the IDE is running, and I cannot start up with them in the path at all.
How about trying the real 6.0 release, build #5766?
I will try it. I wasn't aware there was a final release.
I still get the same error. It succeeds with files up to 123 MB, but the larger of the two files in my project, at 175 MB, fails no matter what. A better algorithm is needed, one that doesn't depend on file size and can page data out of memory when it exceeds the limit.
Take a look at editors on other OSes, such as the Amiga, which has many file editors capable of opening files of any size. They were written with limited resources in mind and deal with large files in chunks.
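A minimal sketch of what chunk-based processing looks like (the class name and the 64 KB buffer size are just illustrative); only a fixed-size window of the file is ever in memory, no matter how big the file on disk is:

    // Minimal sketch: process a file of any size through a fixed-size buffer.
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class ChunkedReader {
        public static void main(String[] args) throws IOException {
            byte[] buffer = new byte[64 * 1024]; // 64 KB window, not the whole file
            try (InputStream in = new FileInputStream(args[0])) {
                int read;
                while ((read = in.read(buffer)) != -1) {
                    // process buffer[0..read) here, then reuse the buffer
                }
            }
        }
    }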
Attachment(s):
idea60_failure.jpg
Can't you just exclude the directory that contains these 100-200 MB XML files from the workspace? Do you really need to edit them with IDEA?
You can still run your Ant tasks on the files even if IDEA doesn't parse them.
I agree that IDEA should handle it, especially if you aren't loading them into the editor, but maybe you should just avoid the problem?
You said it worked with the 123 MB file but not the 175 MB one. Have you tried allocating the maximum available memory with the /3GB switch, or trying the 64-bit JVM (jdk-1_5_0_09-windows-amd64.exe), which allows much larger heap sizes?
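If you do bump the heap, here is a quick sanity check that the -Xmx setting actually took effect (the class name and the heap value are illustrative, not a recommendation):

    // Launch with, e.g.:  java -Xmx1536m HeapCheck
    public class HeapCheck {
        public static void main(String[] args) {
            long maxBytes = Runtime.getRuntime().maxMemory();
            System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MB");
        }
    }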
I personally wouldn't want IDEA to be using GBs of heap space because the garbage collector will probably make it sluggish.
Another possibility: can you create a special file type with a file-name pattern that matches only these database XML files, e.g. "dbload.xml", and have them treated as plain TXT files? That might reduce IDEA's processing load. I haven't tried this, though, and I'm not sure the order in which file-type patterns are applied is guaranteed, i.e. would "dbload.xml" match the "dbload.xml" pattern of a custom "bigfiles" type, or the "*.xml" pattern of the XML file type?
Fixed in 6.0.1
IDEA still behaves really poorly with large XML files. Even files in the 500 KB to 1 MB range cause major problems. Parsing takes several seconds, and if there is anything anywhere in them that IDEA thinks is an error, it re-parses them on every frame activation (ignoring my settings NOT to synchronize anything on frame activation/deactivation).
I disabled highlighting completely for big files, but I agree the performance is very poor for bigger files.
How is it fixed in 6.0.1, out of curiosity? Is it simply disabled for large files? How large is "large"? Have you considered a threaded background parsing scheme, with an option controlling when the threaded approach kicks in? Perhaps that could be used until you come up with a decent algorithm for parsing and working with large files.
Basically, the point is that IntelliJ is used by large corporations with large projects. Smaller shops are far less likely to pay the premium price for IntelliJ, so you have to assume that large projects are the norm rather than the exception.
I appreciate the speed with which a temporary fix has been provided (perhaps even a permanent one, since you weren't clear on how you fixed it). But, if you could, please keep your clientèle in mind.
Gabriel
Hi Gabriel,
"Have you considered a threaded background parsing scheme" good idea!
I think JB using this from the beginning by the way ;)
Also interesting how corporation size related to processed files size.
The main problem, I think, is that parsing a large file requires many objects to store its structure. You could try parsing those files with plain DOM, and I don't think performance would be lightning fast there either, but PSI is a more complicated representation than DOM, so...
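To make the contrast concrete, here is a minimal sketch of a streaming (SAX) parse; unlike DOM, or IDEA's richer PSI, it never builds the whole document structure in memory (the element-counting handler is purely illustrative):

    // Streaming (SAX) parse: memory stays roughly constant because no tree is built.
    import java.io.File;
    import javax.xml.parsers.SAXParser;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    public class StreamingCount {
        public static void main(String[] args) throws Exception {
            SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
            final long[] elements = {0};
            parser.parse(new File(args[0]), new DefaultHandler() {
                @Override
                public void startElement(String uri, String localName, String qName,
                                         Attributes attributes) {
                    elements[0]++; // handle one element at a time, then forget it
                }
            });
            System.out.println("Elements seen: " + elements[0]);
        }
    }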
Thanks,
Dmitry
The 6.0.1 release finally fixed this issue for me, although other major issues have appeared (at least for me). Thanks for fixing this.