performance godlike in build 7312?

i just installed the newest eap, and i am more than surprised. memory consumption usually reached 600mb within minutes and idea tortured my cpu (probably gc); now i'm at ~300mb instead. code completion response time has also improved, the "first app start after idea restart because there was a new version of a plugin i use" is faster than before (idea sometimes locked up for a couple of minutes - this hasn't happened since the 7294 eap), and code editing in general is a lot smoother.

has anything been done about this, or is this caused by
+ deleting all idea caches
+ increasing the max memory to 900mb instead of 600? (this would make no sense, as idea is just using about 300mb right now)

to avoid confusion:
i work on 2 computers. the first is a 3.5 ghz core2duo, overclocked; biggest project 5k classes, max memory 400mb, never had any problems.
the other one is a 3ghz p4, hyperthreaded, 2gb ram; max memory was 600, then 800, then i deleted the caches, and now the xmx is 900. editing was sluggish until i set the memory to 800 (that was the 727x build); project size 14k classes. but after i installed 7312, the editor became noticeably faster compared to the last eap. i used to wait seconds for code completion to show a result; now it's just there...


You will definitely see a performance increase going from 600MB heap to 900MB. Unless you have a pathologically crappy box, I generally recommend 1024MB of heap for IDEA. The fact that your heap monitor only shows 300MB doesn't matter. You will still see a speed increase with increased heap. No, I don't know why that is either.

--Dave Griffith

Holy cow! 1024MB heap??? What are you doing with IDEA that warrants such a huge heap? I edit a rather large POJO project using the out-of-the-box 192MB heap size with no problems.

No, I don't know why that is either.

--Dave Griffith


how did you know what i was going to ask?

I've got the same question.

I just recently had to bump it up to 256, working with a project with 20+ modules, all with EJB/app/web/etc.

Don't get me wrong. IDEA will certainly work with smaller heaps for many projects. You'll just get a quite noticeable performance boost if you configure a larger heap, if you've got enough RAM. I imagine this is because more indices can be kept in memory, but I'm admittedly just guessing.

--Dave Griffith

What starting heap size would one use with a max heap size of 1024m?

Hello Dave,

Don't get me wrong. IDEA will certainly work with smaller
heaps for many projects. You'll just get a quite noticeable
performance boost if you configure for larger heap, if you've got
enough RAM. I imagine this is because more indices can be kept in
memory, but I'm admittedly just guessing.


I would expect the opposite, in fact. IDEA doesn't have any algorithms to
adapt cache sizes to available JVM memory. On the other hand, making the
memory available to JVM takes it away from OS disk caches, which would be
a more efficient use of it.

--
Dmitry Jemerov
Software Developer
JetBrains, Inc.
http://www.jetbrains.com/
"Develop with Pleasure!"


Just reporting my experiences. Just tried running with a 256MB heap again, and editor update notably lagged typing. Reverting to 1024MB fixed the performance issue. At no point did the heap monitor show more than 130MB.

--Dave Griffith

MacBook Pro, 2Gigs Ram but otherwise stock, running under a 1.5 JVM, if it matters.

--Dave Griffith

I'm taking it you have at least 4GB of physical RAM. If you have 2GB, allocating 1GB for IDEA alone will hurt the disk cache, like Dmitry wisely pointed out.

Edit: Hmm. I know OS X is definitely smarter than Windows when it comes to handling inactive memory and disk caching. It's been a while since I last used IDEA on my Macbook. I'll give it a try tonight and see how it goes.

Holy cow! 1024MB heap??? What are you doing with
IDEA that warrants such a huge heap? I edit a rather
large POJO project using the out-of-the-box 192MB
heap size with no problems.


Here's my vmoptions file (From Idea 6.0.5)
-Xverify:none
-Xms128m
-Xmx192m
-XX:MaxPermSize=99m
-ea

Java 6 update 2. WinXP.

Project size is 1500 java files (2000 classes), plus hundreds of resource files. Sometimes IDEA has run smoothly for a whole week of heavy coding without restarts.

I've heavily tuned the Windows XP installation to consume minimal resources. No paging file at all (so Windows doesn't have the option to swap Java, IDEA, or the files I'm working with out to a swap file). About 30 processes shown in task damager. No antivirus software, just a firewall.

My only complaint is cold start performance, but since a cold start happens less than once a week, that isn't really a problem from my perspective. =)

P.S.
I've uninstalled all the plugins that are not needed.

On 2007-09-29 01:57:36 +0400, Hezekiel <no_reply@jetbrains.com> said:

task damager


Nice typo. I like it!

confirm

but is it idea, or is it java?

JOI, What's the OS memory footprint after a boot with the minimum
programs running?

N.


It goes along with File Mangler and, my personal favorite, Windows Exploder.

RRS

I've not seen similar issues with running other large applications on Apple's JVM, but I'm sure IDEA is taxing the VM/OS in ways that other applications don't (memory mapped files? file system listening? gremlins?).

--Dave Griffith

Here are the settings I'm using:

-Xms256m
-Xmx1536m
-XX:MaxPermSize=99m

The main need for a large heap is when I run Inspect->Analyze. Actually 1.5 GB is nowhere near enough to run Inspect->Analyze on my whole project with my default Error profile. I can only run it on the whole project if I only pick a handful of inspections to run. I guess IDEA is caching each file that has an inspection warning, which quickly adds up to a huge amount of memory. It would be more efficient if it just stored code pointers.

When I start IDEA 7312, the status bar says "78 of 254". But after using it for a day, then closing every single panel and hitting the GC button multiple times, it says "238 of 566". I wonder what objects IDEA is still holding onto? Usually I just restart IDEA every day, or after 8 hours of heavy coding, whichever comes first, because it starts slowing down once the heap gets too big.

I wouldn't mind a "Nuke panels/Restore IDEA to Startup Condition" button next to the GC button. It would close all tabs, close all panels (run, debug, find, inspect, etc.) and nuke any other internally cached objects, to get IDEA as close as possible to its original startup condition without having to start up again.

A less impressive-sounding but maybe more appropriate name would be "Warm Restart". You could have it under the File or Tools or Help menu.


Dmitry Jemerov wrote:

IDEA doesn't have any algorithms
to adapt cache sizes to available JVM memory.


I assumed you used soft references for some caches? If you do,
shouldn't IDEA automatically benefit from a larger heap?
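A soft-reference cache of the kind being assumed here can be sketched as follows (a minimal illustration, not IDEA's actual implementation; the class and method names are made up):

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

// Minimal soft-reference cache sketch: entries may be reclaimed by the GC
// under memory pressure, so a larger heap lets cached values survive longer.
class SoftCache<K, V> {
    private final Map<K, SoftReference<V>> map = new HashMap<K, SoftReference<V>>();

    void put(K key, V value) {
        map.put(key, new SoftReference<V>(value));
    }

    // Returns null if the entry was never cached or has been cleared by the GC.
    V get(K key) {
        SoftReference<V> ref = map.get(key);
        return ref == null ? null : ref.get();
    }
}
```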

IIRC, soft references survive (by default) for up to 1000 ms for every
MB of free space in the heap, calculated (for the server VM) using the
maximum heap size. For example:

-Xmx256M, allocated 192M => soft refs cleared after 64 secs
-Xmx512M, allocated 192M => soft refs cleared after 320 secs

And this is true even if the soft references themselves only use a few
MB of memory and retaining them would not have caused an OOM error. If
they are too old relative to the amount of "possibly allocatable"
memory, they are flushed.

(See also -XX:SoftRefLRUPolicyMSPerMB)
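The two examples above follow directly from the default 1000 ms/MB policy; as a sketch (the method name is mine, and the server VM's max-heap free-space basis is assumed):

```java
public class SoftRefLifetime {
    // Default value of -XX:SoftRefLRUPolicyMSPerMB
    static final long MS_PER_MB = 1000;

    // Server VM: free space is computed against the maximum heap (-Xmx),
    // not the currently allocated heap.
    static long lifetimeMs(long maxHeapMb, long allocatedMb) {
        return (maxHeapMb - allocatedMb) * MS_PER_MB;
    }

    public static void main(String[] args) {
        // -Xmx256M, 192M allocated -> 64 MB free -> ~64 secs
        System.out.println(lifetimeMs(256, 192) / 1000 + " secs");
        // -Xmx512M, 192M allocated -> 320 MB free -> ~320 secs
        System.out.println(lifetimeMs(512, 192) / 1000 + " secs");
    }
}
```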

Hello Jonas,

>> IDEA doesn't have any algorithms to adapt cache sizes to available
>> JVM memory.
>>

I assumed you used soft references for some caches? If you do,
shouldn't IDEA automatically benefit from a larger heap?


Yes, we do.

IIRC, soft references survive (by default) for up to 1000 ms for every
MB of free space in the heap, calculated (for the server VM) using the
maximum heap size. For example:

-Xmx256M, allocated 192M => soft refs cleared after 64 secs
-Xmx512M, allocated 192M => soft refs cleared after 320 secs
And this is true even if the soft references themselves only use a few
MB of memory and retaining them would not have caused an OOM error.
If they are too old relative to the amount of "possibly allocatable"
memory, they are flushed.


Interesting. :) I actually never looked into the details of soft references
implementation. In this case, indeed, higher -Xmx should help many of IDEA's
caches live longer.

--
Dmitry Jemerov
Software Developer
JetBrains, Inc.
http://www.jetbrains.com/
"Develop with Pleasure!"


That's wild. I can now awe people by letting them know that max heap size matters even if it's never used. Thanks.

--Dave Griffith

Hello Jonas,

IIRC, soft references survive (by default) for up to 1000 ms for every
MB of free space in the heap, calculated (for the server VM) using the
maximum heap size. For example:

-Xmx256M, allocated 192M => soft refs cleared after 64 secs
-Xmx512M, allocated 192M => soft refs cleared after 320 secs


Wow, I always thought SoftReferences only get cleared when the heap reaches its limit, with different
behavior between the client and server VM: the client VM prefers clearing refs instead of
growing the heap, while the server VM grows the heap and keeps the refs as long as possible.

Can you point to a source for this time-dependency?

Sascha

Can you point to a source for this time-dependency?


Sure. Take a look at
http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html:

Soft references are kept alive longer in the server virtual machine than in the client. The rate of clearing can be controlled with the command line option -XX:SoftRefLRUPolicyMSPerMB=<N>, which specifies the number of milliseconds a soft reference will be kept alive (once it is no longer strongly reachable) for each megabyte of free space in the heap. The default value is 1000 ms per megabyte, which means that a soft reference will survive (after the last strong reference to the object has been collected) for 1 second for each megabyte of free space in the heap. Note that this is an approximate figure since soft references are cleared only during garbage collection, which may occur sporadically.


Also, http://java.sun.com/docs/hotspot/HotSpotFAQ.html:

Starting with 1.3.1, softly reachable objects will remain alive for some amount of time after the last time they were referenced. The default value is one second of lifetime per free megabyte in the heap. This value can be adjusted using the -XX:SoftRefLRUPolicyMSPerMB flag, which accepts integer values representing milliseconds. For example, to change the value from one second to 2.5 seconds, use this flag:

-XX:SoftRefLRUPolicyMSPerMB=2500

The Java HotSpot Server VM uses the maximum possible heap size (as set with the -Xmx option) to calculate free space remaining.

The Java Hotspot Client VM uses the current heap size to calculate the free space.
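The difference between the two free-space bases can be made concrete with a small sketch (the method names are mine; the default 1000 ms/MB policy is assumed):

```java
public class FreeSpaceBasis {
    // Default value of -XX:SoftRefLRUPolicyMSPerMB
    static final long MS_PER_MB = 1000;

    // Server VM: free space measured against the maximum heap (-Xmx).
    static long serverLifetimeMs(long maxHeapMb, long usedMb) {
        return (maxHeapMb - usedMb) * MS_PER_MB;
    }

    // Client VM: free space measured against the current (committed) heap.
    static long clientLifetimeMs(long currentHeapMb, long usedMb) {
        return (currentHeapMb - usedMb) * MS_PER_MB;
    }

    public static void main(String[] args) {
        // -Xmx512M, but the heap has only grown to 256M, with 192M in use:
        // the server VM keeps soft refs five times longer.
        System.out.println("server VM: ~" + serverLifetimeMs(512, 192) / 1000 + " secs");
        System.out.println("client VM: ~" + clientLifetimeMs(256, 192) / 1000 + " secs");
    }
}
```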

Hmmm, sorry if the quotes didn't wrap (they don't on my machine at least).

In any case, I guess the reasoning behind this would be that just
because one configures a certain maximum heap size, applications using
soft references shouldn't automatically grow to that maximum size just
because they have been running for a while; they wanted some way of
removing references that are not likely to be reused, and implemented a
time-based scheme as a kind of heuristic measure of whether the data is
still worth retaining in memory.

It should still be possible to let soft references grow up to the bound
of the current maximum heap size by setting
-XX:SoftRefLRUPolicyMSPerMB=999999999 or something like that, though
I've never tried that.

Hi Sascha,

For more information, please check this link:

http://java.sun.com/docs/hotspot/HotSpotFAQ.html

"Starting with 1.3.1, softly reachable objects will remain alive for some amount of time after the last time they were referenced. The default value is one second of lifetime per free megabyte in the heap. This value can be adjusted using the -XX:SoftRefLRUPolicyMSPerMB flag, which accepts integer values representing milliseconds."

Jonas Kvarnström wrote:
>> The Java HotSpot Server VM uses the maximum possible heap size (as set
>> with the -Xmx option) to calculate free space remaining.
>>
>> The Java Hotspot Client VM uses the current heap size to calculate the
>> free space.


So if I'm reading this right, setting your -Xmx high shouldn't make a
blind bit of difference, unless you're using the Server VM.

So the question to Dave is, which are you using? Or is it possible the
Mac Client VM works more like the Server VM in this respect?

N.

Nathan Brown wrote:

So if I'm reading this right, setting your -Xmx high shouldn't make a
blind bit of difference, unless you're using the Server VM.


That's true. I guess I subconsciously assumed IDEA was using the server
VM... I haven't done any tests lately, but when I was doing some
benchmark testing with a CPU-intensive piece of software the server VM
was a bit slower to start but won by a great margin for any tests that
took more than a minute or so, since the client VM completely omitted
certain optimizations. And IDEA certainly tends to be CPU-bound for
many operations, and is kept running for days at a time (at least here).
But anyway, those tests were done years ago and things may well have
changed since then, so I shouldn't have made any assumptions about what
VM was used by IDEA.

Yeah, I guess you could argue that IDEA is as much a 'server' style
application as a 'client' application in that it employs many background
threads, a lot of caching, and performs a great deal of logical
processing behind the scenes, the likes of which in a classic
client/server or multi-tiered architecture would be handled by a server
side layer.

Time to experiment again I think!

N.


...

Time to experiment again I think!


Definitely, and I'd be interested to read what you discover.

Keep in mind that perceived performance during start-up and early operation (possibly for a few minutes as different code paths get exercised) is likely to be worse under the server-mode JVM, since it does much more work to decide what's worth native compilation and what's worth in-lining.


Randall Schulz

Nathan Brown wrote:

So the question to Dave is, which are you using? Or is it possible the
Mac Client VM works more like the Server VM in this respect?


Also keep in mind that some JVMs can automatically switch to server mode depending on the machine's hardware configuration. Sun JVMs (except on i586 Windows) implicitly use the server VM if -client is not specified and the machine has 2 or more CPUs (probably cores) and 2GB or more memory.
I don't have the rules for Mac Intel JVMs at hand, but it's probably similar.

Maarten

i can't be sure but on my work machine, using sun 1.5.0_07 I believe, it says the default is the client vm (when running java -X to see the help on the extended options).

This machine is a P4, single core (no hyperthreading as far as i can remember) with 4GB mem, running windows2000.
