Issue with DataGrip data export to CSV on large data volumes.

Hi Team,

DataGrip has an issue with exporting data to CSV for large data volumes. What do I mean by an issue? On a 2.5 GHz Intel Core i7 with 16 GB of memory, an extract of 182 million rows failed. The total size of the resulting CSV created by psql \copy was 36 GB. The main point: 100% CPU consumption plus a heavy hit on memory, and the notebook sounded like a Super Hornet taking off.

I would assume DataGrip tries to fetch the entire result set before spooling data to disk. I have no evidence for this, except one observation: the file remained empty for 20 minutes before I killed the DataGrip process. As a result I was forced to use the psql \copy command.
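For reference, the kind of \copy invocation I ended up using looks roughly like this (the table name and output path are placeholders):

    \copy big_table TO '/tmp/big_table.csv' WITH (FORMAT csv, HEADER)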

Could you please take a look at the case and fix this behaviour in DataGrip?


With best regards,

Kostyantyn


DataGrip 2017.1.4
Build #DB-171.4694.3, built on May 16, 2017
Licensed to Kostyantyn Krakovych
Subscription is active until September 24, 2017
JRE: 1.8.0_112-release-736-b21 x86_64
JVM: OpenJDK 64-Bit Server VM by JetBrains s.r.o
Mac OS X 10.12.4

5 comments

Hi,

As a temporary workaround, you can use the pg_dump utility. DataGrip can be integrated with it.
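For anyone who prefers the command line, a plain pg_dump invocation would look roughly like the following (host, user, database, and table names are placeholders):

    pg_dump -h localhost -U postgres -d mydb -t big_table -f big_table.sql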


Thank you.

Permanently deleted user

I can confirm I have the exact same problem on a 15 GB table.
Thanks for your post!


@Philippe Lavoie-mongrain Hi,

Try using a native tool to dump big tables.

Thank you.


DataGrip still chokes on even moderately large exports. Is there an issue in the tracker for this? If so, I have been unable to locate it.


Yep, it's a real shame that when trying to dump large tables, DataGrip just silently fails and outputs a 0-sized file. It would be nice to be able to rely on the tools we use every day.

Using pg_dump is often out of the question because it doesn't support outputting TSV/CSV/JSON files, which are handy for moving data between different kinds of systems reliably.
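As an aside, psql's \copy should stream CSV or TSV straight to a file without buffering the whole result set in memory, which makes it a usable stopgap here (the database and table names below are placeholders):

    psql -d mydb -c "\copy big_table TO 'big_table.csv' WITH (FORMAT csv, HEADER)"
    psql -d mydb -c "\copy big_table TO 'big_table.tsv' WITH (FORMAT text)"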

