Endless introspection.

We have an Oracle database with a big development schema: ~11K tables, ~12.5K views, ~4.5K packages, and ~2.5K object types. Packages very often reach 10K lines of code and beyond. So DataGrip takes hours, gigabytes of memory, and kilowatt-hours to introspect that schema; it is not even able to introspect it with the default memory settings. Even a slight DDL change knocks it out for tens of minutes. Moreover, the database is something many developers work on, and DataGrip does not recognize objects newly created by other developers. Reintrospecting the schema for hours to see one new package or table? Is this a joke? Should I send you logs? Or maybe the schema's DDL?
It is obvious that DataGrip keeps a copy of the schema in memory. Why don't you change this behaviour? I am sure it's better to have a fast and responsive working product with fewer features than a broken Swiss Army knife.

18 comments

@v0id,

Could you email me vasily.chernov@jetbrains.com your zipped IDE & SQL logs for investigation?


Hello, 

> Should I send you logs? Or maybe the schema's DDL?

it'd be very helpful to get such a schema. I've never seen a database schema with so many objects, and we've never tested DataGrip on one. Moreover, I cannot imagine who could author such a schema by hand. Or was it generated somehow?

> DataGrip keeps a copy of the schema in memory.

It's needed for auto-completion, inspections and highlighting.

> Why don't you change this behaviour? I am sure it's better to have a fast and responsive working product with fewer features than a broken Swiss Army knife.

We didn't design DataGrip to operate on such huge schemas, because the vast majority of schemas are nowhere near that big. However, we've planned to revisit this concept and redesign the Oracle introspector, and an example of such a huge schema will be very helpful in that regard.

I'm experiencing something pretty similar after upgrading to 2019.3. I submitted an issue here: https://youtrack.jetbrains.com/issue/DBE-9850


I have similar issues. Overall, 2019.3 seems slower than the previous version, and upgrading to the most recent 2019.3.2 doesn't seem to help much. Introspection usually takes the memory footprint to the maximum 4 GB I set in the JVM configuration file, and it bogs down my laptop with high CPU usage at the same time. I think they probably changed something fundamental in 2019.3 that is very difficult to roll back, so we have to live with it until it gets figured out in the far future, or until I upgrade to a 32 GB laptop.
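For reference, that JVM configuration file is normally the IDE's custom *.vmoptions file (editable via Help | Edit Custom VM Options); a minimal sketch of capping the heap at 4 GB, assuming the standard HotSpot options:

```
-Xms512m
-Xmx4096m
```

The IDE reads this file at startup, so it needs a restart to take effect.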


Is there any progress on this topic? I am having similar issues with an Oracle EBS "APPS" schema.


Jakob, we're working on the topic, but the work requires a pretty long time: Oracle dictionaries are really slow.

To date, we've re-implemented retrieving sources from LONG columns (see https://youtrack.jetbrains.com/issue/DBE-10689), and we've optimized some queries for Oracle 12.

You're welcome to update to the latest stable release; it might help.


Thanks for the quick reply. So it seems I have to wait then... I've already upgraded, and unfortunately the problem persists. But I do understand that this is quite a long process, and I appreciate the effort you're putting into enhancing this awesome tool even further!


Jakob Drees,

Also, the fix is available in DataGrip 2020.2 EAP.


vasily chernov Unfortunately, my introspection in 2020.2 EAP still takes forever and leaves me with incomplete packages: it recognizes the package itself and any included procedures and functions, but only shows their signatures as comments.


At the moment I use:

IntelliJ IDEA 2020.1.4 (Ultimate Edition)
Build #IU-201.8743.12, built on July 21, 2020
Licensed to *****
Subscription is active until August 23, 2020
Runtime version: 11.0.7+10-b765.65 amd64
VM: OpenJDK 64-Bit Server VM by JetBrains s.r.o.
Windows 10 10.0
GC: ParNew, ConcurrentMarkSweep
Memory: 920M
Cores: 4
Non-Bundled Plugins: Lombook Plugin

I connected to an Oracle DB instance and turned off auto-introspection for this connection. But still, if I try to reach any big schema for the first time, IDEA starts introspecting it, and that lasts for minutes. My company's DBA usually kills my connection and asks me to stop using IntelliJ because it overloads the DB. Everyone on my team uses Oracle SQL Developer or DBeaver; both apps are quite fast.
I would like to use IDEA because I got used to it while working on previous projects with smaller DBs.
Can I fix it somehow?


Yeah, I'm just reading these posts.
And today I'm nervous that I might have bogged down my organization's Oracle server because of this product's problem with introspection.
I had been doing some work, then wanted to add an additional schema to the managed schema list, and it started introspection.
I ended up walking away as I was signing off for the day, but I came back an hour and change later and it was still introspecting; granted, the schema was massive.
I tried to cancel it and it just kept hanging; ultimately I force-quit the application and then wasn't able to connect back to the database.
I'm really hoping I didn't just bring my organization's Oracle server to its knees with an introspection of a large schema.
I love DataGrip and used it with Postgres and Snowflake at my last place of work with no issues.
Using it for the first time with Oracle, it was looking so good: I was so excited to see packages in full detail and not have to be in SQL Developer from Oracle... but I might have to switch back to that for a bit.


I'm having the same issue with excessively slow schema inspections.

DataGrip has always been a poor performer (in comparison to HeidiSQL, which can very rapidly inspect an entire database). As a result, since our servers may have up to 100 databases, I toggle which ones I'm working with so DataGrip only inspects a couple.

However, as of a recent version (perhaps 2019.3), DataGrip became excessively slow. Having maybe 4 databases inspected, with 200 tables each, can take hours to complete.

If I tail the log to see what DataGrip is doing, I can see that its queries are excessively slow, i.e. each query it fires can take up to 2 seconds to complete. If I run that same query using the mysql CLI, it takes a fraction of a second. I'm using a local mysql CLI, so it's not network lag.

---

Aside from DataGrip executing queries very slowly, it seems to skip obvious optimisations. E.g. when it queries database users, it does this one user at a time. And if you alter a table in any database, it seems to trigger a full re-inspection of everything, where it could just re-inspect the one table that was altered.

I'm hoping there might be a solution, because I'm pretty much working without the benefits of an introspected database schema: the task ultimately never completes before I need to move to the next database, and inspection starts all over again.


I have watched the database logs (~/.cache/JetBrains/DataGrip2020.2/database-log/database.log) and identified that DataGrip spends 12 minutes running SHOW GRANTS statements. After inspecting all the grants, a single database takes another 5 minutes to inspect.

DataGrip will run "select grantee from information_schema.user_privileges group by grantee;", which identifies 307 records, and then it runs a SHOW GRANTS for each.

On my server, there are 307 grantees. One database has about 220 tables, plus triggers, routines, and functions, which might add up to about 300 resources in total.

So if I have 10 databases selected for inspections, it will take DataGrip 62 minutes to complete the inspection.

I noted that if I run a CREATE TEMPORARY TABLE statement, this triggers DataGrip to start running the SHOW GRANTS statements again.

For argument's sake, I ran the same set of SHOW GRANTS statements using the mysql CLI, and it completed within 31 seconds. So it is neither network lag nor server performance that is contributing to how long DataGrip takes to perform this task.
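Just to sanity-check those figures (plain arithmetic using only the numbers reported above; nothing here queries a server):

```python
GRANTEES = 307          # rows returned by the user_privileges query
GRANT_PHASE_MIN = 12    # minutes DataGrip spends on SHOW GRANTS
PER_DB_MIN = 5          # minutes to inspect one database afterwards
CLI_TOTAL_SEC = 31      # same 307 statements via the mysql CLI

# Estimated total for 10 selected databases.
total_min = GRANT_PHASE_MIN + 10 * PER_DB_MIN
print(total_min)  # 62

# Per-statement cost: DataGrip vs plain CLI.
datagrip_sec = GRANT_PHASE_MIN * 60 / GRANTEES   # ~2.3 s each
cli_sec = CLI_TOTAL_SEC / GRANTEES               # ~0.1 s each
print(round(datagrip_sec, 1), round(cli_sec, 2))
```

So each SHOW GRANTS costs DataGrip roughly twenty times what the bare client pays, which is consistent with the ~2-second-per-query behaviour noted earlier in this thread.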

I also tried switching the database driver back to MySQL 5.1, but it made no difference to how long it took to complete inspection.


We face a similar issue with Snowflake; it's unbearable. I have a MacBook Pro with 32 GB of RAM, and it stays unresponsive for more than 10 minutes!

We have thousands of tables, like in any data warehouse. It's clearly not doing an efficient "incremental" introspection.
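To illustrate what I mean by "incremental", here is a minimal sketch (the helper names are hypothetical; the idea relies on the server exposing per-object modification timestamps, e.g. Oracle's ALL_OBJECTS.LAST_DDL_TIME or the LAST_ALTERED column in Snowflake's INFORMATION_SCHEMA) of refreshing only the objects that actually changed:

```python
def incremental_refresh(cache, fetch_catalog, fetch_definition):
    """Re-fetch only objects whose timestamp moved since the last run.

    cache: dict mapping object name -> (last_modified, definition)
    fetch_catalog: returns {name: last_modified} for all objects (one cheap query)
    fetch_definition: returns the full definition of one object (expensive)
    """
    catalog = fetch_catalog()
    refreshed = []
    for name, ts in catalog.items():
        cached = cache.get(name)
        if cached is None or cached[0] != ts:
            cache[name] = (ts, fetch_definition(name))
            refreshed.append(name)
    # Drop objects that no longer exist on the server.
    for name in list(cache):
        if name not in catalog:
            del cache[name]
    return refreshed

# Tiny demo with an in-memory "server": only PKG_B changed since the last run.
server = {"PKG_A": (100, "pkg a src"), "PKG_B": (205, "pkg b src v2")}
cache = {"PKG_A": (100, "pkg a src"), "PKG_B": (200, "pkg b src")}
changed = incremental_refresh(
    cache,
    fetch_catalog=lambda: {n: v[0] for n, v in server.items()},
    fetch_definition=lambda n: server[n][1],
)
print(changed)  # ['PKG_B']
```

With thousands of tables, one catalog query plus a handful of definition fetches is a very different cost from re-reading everything.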

Does anyone face similar issues?


Ana Narciso
I created an issue based on your description: https://youtrack.jetbrains.com/issue/DBE-11860. Could you attach your zipped IDE log folder and SQL logs?


Courtney Miles,
So do you work with MySQL?
Could you attach IDE and SQL logs to the following issue: https://youtrack.jetbrains.com/issue/DBE-11863?


Hi vasily chernov,

Thank you for raising an issue. I have attached the logs.

You are correct that I'm using MySQL.
