Issues that make DataGrip an odd tool I can't recommend as a database development tool.

Hi Team,

Here are the issues that make DataGrip an odd tool I can't recommend for daily use as a database development tool.

If you know of an option or setting I missed that could resolve any of the issues below, please let me know. If I'm doing something the wrong way, please show me the right way.

1. There's no value alignment in the result set based on column data type. This is a basic feature of any database tool. Without it, the tool is nearly useless if you work a lot with data. https://youtrack.jetbrains.com/issue/DBE-2701 (Created on May, 2016)
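To sketch the convention being asked for (numbers right-aligned, text left-aligned, so the type is visible at a glance), here is an illustrative query using Oracle's lpad/rpad; this is not DataGrip functionality, just the idea:

```sql
-- Right-align the numeric value, left-align the text value:
-- in a grid, the string '42' and the number 42 would then be
-- distinguishable at first glance.
select lpad(to_char(42), 12) as number_col
      ,rpad('42', 12)        as text_col
  from dual;
```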

2. There's no way to simplify database connection management when you have more than a few connections. Have you ever worked with an Oracle database? Each user/schema has its own password, and a database may have many different users/schemas within the same application. Add DEV, QAT, UAT, and PROD environments, and we end up with 20-50 connections in the same project. It's madness. Simple grouping could make things look better. Should I create a separate project for each environment, all pointing at the same repository folder? That would probably be even more ridiculous.

3. DataGrip has issues with autocomplete. For Oracle source code:

  • DataGrip knows nothing about, for example, the ALL_SOURCE view, even if I enable all schemas in the Options tab of the connection properties.
  • If I have two connections to the same schema on different database environments (DEV and QAT), autocomplete will suggest BOTH (!) tables.
  • Given a slightly complex SQL query with a WITH clause, DataGrip fails to parse it.
  • In general, there are a lot of 'Unable to resolve ...' issues with Oracle source code. Sometimes it looks like 70% of a package is unresolved. For example, a constant from a package specification is not recognized in another package body (sic!).
  • DataGrip does not know about the COLUMN_VALUE field of an Oracle collection type, for example, AS TABLE OF NUMBER...
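For reference, the last point concerns Oracle's pseudo-column for collections of scalar types; a minimal sketch with an illustrative type name:

```sql
-- num_list is an illustrative name; COLUMN_VALUE is the real
-- pseudo-column Oracle exposes for collections of scalars.
create type num_list as table of number;
/
select t.column_value
  from table(num_list(1, 2, 3)) t;
```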

4. DataGrip's logic for executing source code on a database suffers from a strange dualism. If we execute a SQL file from the Files tab, we are asked which database to use. OK, that's normal and acceptable. But when we run a selected SQL query within an open file, DataGrip silently executes it on the database previously associated with that file! Yes, you are asked only the first time you run the file, and ... I'm still not sure what will be selected as the target database...

5. For PostgreSQL databases I have seen issues with metadata collection. I create a foreign key, and the IDE picks it up incrementally. A few more actions creating additional foreign keys, and the first one is gone. Even a complete metadata refresh does not help.

6. DataGrip does not know about the Oracle CHAR keyword for type definitions: https://youtrack.jetbrains.com/issue/DBE-2634

7. Source code style options for SQL are very limited... For example, there's no way to get the style below. Though it's not as mission-critical as point #1 above.

select a.column_a      a
      ,a.column_second b
  from table_sample    a
  join table_test      b on b.id = a.id
 where a.column_c = 12
   and a.column_d = 'test';

 

At the moment, DataGrip is a tool I can't fully trust.

  • I have to check each column's data type via system views (just in case, because I can't tell whether it's a number or text). That's fine if your database has 20 tables, but what if it has more? Or you're seeing the database for the first time and aren't familiar with it?
  • I have to manually check the actual objects (foreign keys, constraints, etc.) related to a table. There's no guarantee the metadata was collected without mistakes.
  • Autocomplete doesn't work as expected, and there's a lot of noise from unresolved objects in source code. Sublime or Notepad have an advantage here: no useless warnings.

 

Best regards,

Kostyantyn


Hi,

What DataGrip version do you use?

Thank you.


Hi,

Currently, I'm on DataGrip 2017.1 EAP. 

Here are some comments:


> 2

We're working on it. As a workaround,  you can name data sources according to your needs and use quick search.

> 3
>> 3.1 Add SYS schema to your data source

>> 3.2 Do you mean autocomplete from sql file? You can configure <file> to <data source> mapping in SQL Resolution Scopes settings

>> 3.3  Could you provide an example (SQL query with WITH clause - DataGrip fails to parse it)?

>> 3.4 We are planning to improve package resolving.

>> 3.5


> 4

If you specified a dialect for your file:


If you didn't specify a dialect for your file:

>5 Could you provide clearer steps to reproduce your problem?

>7 Do you mean SQL Formatting?

Thank you for your feedback.

 


Hi Vasily,

 

Thank you for the reply.

1. About DataGrip version

DataGrip 2016.3.4
Build #DB-163.13906.13, built on February 21, 2017
Licensed to Kostyantyn Krakovych
Subscription is active until September 24, 2017
JRE: 1.8.0_112-release-408-b6 x86_64
JVM: OpenJDK 64-Bit Server VM by JetBrains s.r.o

2. Let's check. I would like to confirm DataGrip has improved since I first tried it.

>> 3.1 Add SYS schema to your data source

I checked Data Source properties -> Options tab -> the SYS schema is checked. I executed the Synchronize command against the data source in the Database panel. Please see the screenshot below: a) the console window does not know the SYS schema; b) no columns from the system view are suggested, though I think that's because SYS was unresolved.

I have never seen anyone prefix a system view with the SYS schema name in a script or code. Since PL/SQL Developer and Oracle SQL Developer identify the views and suggest columns correctly, I believe it's a DataGrip issue. Please see the screenshot from Oracle SQL Developer with correct autocomplete; there's no need to enable any option to make it work.


However, if we check the same input in a SQL file (not a console), confirmed: sys.all_source or simply all_source is identified correctly, but for some reason DataGrip gets the info from the sys@DEV data source, which is not open/active at the moment.

>> 3.2 Do you mean autocomplete from sql file? You can configure <file> to <data source> mapping in SQL Resolution Scopes settings

Have you enabled a secret hidden option? In the latest release version (I do not use EAP) I do not have such an option. Please see the screenshot below.

Just to clarify what I mean by two tables from different schemas being suggested, please see the next screenshot. It's not a console; it's a SQL file from a project repository. I'm trying to imagine what would happen if I synchronized all available environments.

>> 3.3  Could you provide an example (SQL query with WITH clause - DataGrip fails to parse it)?

Confirmed, DataGrip has improved. The WITH clause now looks better than it did. Package bodies still look alarming, though. Please see below.

a) A valid code example with $$PLSQL_UNIT. If the system variable is used as a parameter in a method call, the error message is 'expression expected'.
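A hypothetical minimal case of the pattern (the inquiry directive $$PLSQL_UNIT is real; the procedure name is illustrative):

```sql
create or replace procedure demo_unit_name is
begin
  -- valid Oracle PL/SQL: the predefined inquiry directive used
  -- as an actual parameter, which the editor flagged with
  -- 'expression expected'
  dbms_output.put_line($$plsql_unit);
end demo_unit_name;
/
```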

b) Here is a screenshot of the scroll bar with analysis highlighting for a 100% valid Oracle package body (rotated by 90 degrees). In general:

  • unknown package constants;
  • unknown types/subtypes defined in a package specification;
  • unknown columns of an object type collection:

SELECT a.column_a, a.column_b FROM TABLE(collection);

c) And finally, I found UNPIVOT. The SQL query below runs without an issue, but ID is highlighted as unresolved and UNPIVOT is highlighted as an error.

with test_02 as
(
  select 1 ids, 2 ide, 1 cid from dual union all
  select 1, 3, 2 from dual union all
  select 3, 2, 3 from dual
)
select distinct id
  from test_02 t2
 unpivot (id for field_name in (ids, ide));


>> 3.5

Confirmed. DataGrip knows the COLUMN_VALUE column of an object type. I definitely missed the fix.

> 4

All files have the correct dialect selected. If I execute a SQL file on a database schema, the schema indicator at the bottom right of DataGrip will show it. Though, as I mentioned before:

  • I obviously didn't open the connection;
  • I may miss the indicator;
  • I want to run only a portion of the SQL commands within the file;
  • I select the required statements and press Cmd+Enter.

Oops: without any dialog, DataGrip runs the query on a closed database (yep, it opens the data source without any confirmation). And the database is shown only on the small indicator at the bottom right, just because a few months ago you once executed the script against that database schema. I hope you see what misunderstandings this logic can cause.

>5 Could you provide more clear steps to reproduce your problem?

I will try to reproduce the issue and get back with details, though I see big improvements, so the issue may already be fixed.

>7 Do you mean SQL Formatting?

Yes.

 

Best regards,

Kostyantyn


Hi,

Below are some more comments:

>>3.1 Under investigation

>>3.2 Wait for the DataGrip 2017.1 EAP in March or the DataGrip 2017.1 release

>>3.3
>>>>a) https://youtrack.jetbrains.com/issue/DBE-3624

>>>>b) & c) & d)

https://youtrack.jetbrains.com/issue/DBE-1525
https://youtrack.jetbrains.com/issue/DBE-3886
https://youtrack.jetbrains.com/issue/DBE-3048
https://youtrack.jetbrains.com/issue/DBE-3195
https://youtrack.jetbrains.com/issue/DBE-3754

>>>>e) https://youtrack.jetbrains.com/issue/DBE-3707

>>4 See 3.2

>>5 Under investigation

Thank you for your feedback.


Hi Vasily,

 

Thank you for the update.

 

Here are the steps to reproduce the issue with metadata related to foreign keys in PostgreSQL.

1. All steps were performed on PostgreSQL 9.5.3

select version();

PostgreSQL 9.5.3 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit

2. Create two schemas.

create schema test_core;
create schema test_etl;

3. Create a common dictionary table, say, in the CORE schema.

create table test_core.test_dictionary(d_id bigint, d_txt text);
alter table test_core.test_dictionary add constraint pk_d_id primary key (d_id);

4. Create tables in ETL schema.

create table test_etl.test_a(a_id bigint, d_id bigint, a_txt text);
alter table test_etl.test_a add constraint pk_a_id primary key (a_id);
create table test_etl.test_b(b_id bigint, b_txt text);
alter table test_etl.test_b add constraint pk_b_id primary key (b_id);
create table test_etl.test_c(c_id bigint, a_id bigint, b_id bigint, d_id bigint, c_txt text);
alter table test_etl.test_c add constraint pk_c_id primary key (c_id);

5. Create foreign keys for the tables.

alter table test_etl.test_a
add constraint fk_d_id foreign key (d_id)
references test_core.test_dictionary (d_id);
alter table test_etl.test_c
add constraint fk_a_id foreign key (a_id)
references test_etl.test_a (a_id);
alter table test_etl.test_c
add constraint fk_b_id foreign key (b_id)
references test_etl.test_b (b_id);
alter table test_etl.test_c
add constraint fk_d_id foreign key (d_id)
references test_core.test_dictionary (d_id);

6. Let's check the collected metadata.

7. Let's check 'Copy DDL' on the test_etl.test_a table. Note that we are missing details about the referenced table. The same goes for the test_etl.test_c table.

create table test_a
(
a_id bigint primary key not null,
d_id bigint,
a_txt text,
constraint fk_d_id foreign key (d_id) references
);

8. Let's check what we have in PostgreSQL. Below is a SQL query to check foreign keys, and a screenshot with the result set.

select c.conrelid::regclass::text as table_name
      ,c.conname as constraint_name
      ,pg_catalog.pg_get_constraintdef(c.oid, true) as constraint_definition
  from pg_catalog.pg_constraint c
 where c.conrelid in ('test_etl.test_a'::regclass, 'test_etl.test_c'::regclass)
   and c.contype = 'f'
 order by c.conname;

As we can see, the foreign keys referencing test_core.test_dictionary do exist.

 

N.B. I assume the issue may be related to a PostgreSQL-specific detail: a foreign key name only needs to be unique within its table. The same foreign key name can be assigned to constraints in different tables.

 

In addition, I found confusing logic in the 'Copy DDL' command.

Suppose someone changed a table definition, for example added a new column, and I did not run 'Synchronize' against the data source, either explicitly by calling the command or implicitly via a DDL command. 'Copy DDL' will then give me an outdated version of the table's DDL script.

Why do I think the logic is confusing? Because I didn't expect to get an obsolete DDL script when I explicitly ask for the script. I'm used to always getting the actual DDL in other DB tools. As for whether this situation is purely theoretical: no, I hit it just yesterday. And in general, a database structure is stable for an application in support.

 

With best regards,

Kostyantyn

 


Hi,

Thank you for your feedback and steps to reproduce.
We've got a similar issue: https://youtrack.jetbrains.com/issue/DBE-2441.

> 'Copy DDL' command logic
We're going to make it clearer in DataGrip 2017.1.

Thank you.

Permanently deleted user

I really want to like DG, but I frequently revert back to Toad for DB2. It handles large queries and large result sets without locking up or burning the CPU for no apparent reason. No ability to export headers is a big miss. Also, the error messages are almost non-existent. I have the latest 2017 version (I'd double-check, but it is completely locked up from running an 8K-line query).


@ Kostyantyn Krakovych  Hi,
Concerning the DDL problem: DG can now handle an FK to another schema.
Thank you.


@ Scottdonerjb Hi,
Could you explain how you run the query? I mean, from a file or from a console?
Thank you.


I too really want to like DG (I use CLion and PhpStorm almost all day every day, and IntelliJ IDEA occasionally). But I finally gave up on the muddled UI design and went back to pgAdmin. The main-path use cases are too convoluted for me to fight.


@ William Zeitler  Could you describe these paths?


@Vasily Chernov

In brief, the problem areas of DataGrip as an Oracle development tool:

  • source code formatting is poor and jerky;
  • the tool has problems understanding Oracle PL/SQL;
  • navigation within source code does not work, and when it does, it is useless;
  • no debug mode;
  • poor handling of multiple databases and different schemas.

More details:

1. Source code formatting is still poor. I agree it's good for Java source code, but for Oracle PL/SQL it's jerky, annoying, and useless. Have you seen how it mangles your source code the moment you type a ")" character? Just because it thinks it's a good idea to reformat the WITH clause. Have you tried moving a line back by one character with the Delete key, only to have it jump four characters back because the tool thinks it knows better? Yes, it can be disabled, but in that case any text editor can replace DataGrip.

2. The so-called "intelligent" DataGrip still can't parse Oracle PL/SQL code. 100% valid package body code is highlighted with "problems" like a Christmas tree. Please check my screenshot up the thread; nothing has changed since it was published.

  • The tool knows nothing about package constants declared in another package specification. Why does the tool read metadata from the database if that data is not used?
  • The tool knows nothing about object type attributes. Once again, do you use metadata from the database at all?
  • The tool knows nothing about predefined conditional compilation flags like $$plsql_unit and highlights them as errors.
  • The tool knows nothing about standard Oracle packages like dbms_application_info.

3. The so-called "intelligent" DataGrip can't find function usages within source code (Cmd+B on a function name in a package specification), though a simple search within the source code repository (Cmd+Shift+F) finds dozens of occurrences. Once again, do you use metadata from the database at all?

4. When I go to the declaration (Cmd+B) of a function call within a package body, the tool opens the package body where the function is. BUT what is this? Which database did the tool use to get the source? No info. Even more: no active data source. Should I assume the source code is outdated, shown as it was snapshotted a few weeks ago? When you have a set of environments with different source code versions, all defined as data sources, what am I supposed to think?

5. When I type a table name in a package body, the so-called "intelligent" tool suggests 5 (five) tables with the same name! The only difference is the associated database. Now I see: you do use metadata, to create noisy output and make development harder. Yes, this is because we have more than one environment with the same database but different source code versions.

6. If you have a few different databases of the same type in a single project, you get a mess of false-positive suggestions.

7. More issues could be reported once the above are fixed and I'm able to use the tool on a daily basis.

Summary: DataGrip is very good for quick, small SQL checks against databases from different vendors. But it is completely unusable for daily Oracle developer work.

P.S. Have you fixed the problem with big data exports? Yes, I can use expdp, but then why do I need the tool?


Hello Kostyantyn

First, thank you for clarification of problems in DataGrip encountered during PL/SQL development. 
You've mentioned a lot of different things; it'd be better to split them into several topics and discuss them separately.

1. Source code formatting — I completely agree that the current code formatter does stupid things, and we're working on it now.

2. Parsing Oracle PL/SQL code — my colleague told me that some bugs are already fixed. We're planning to publish an EAP build this week; please check whether these problems are still actual.

As for "knows nothing about standard Oracle packages like dbms_application_info" — please clarify what you mean.

3 & 4. Finding usages in source code — please check this in the coming build; if it's still actual, file an issue in our issue tracker.

5. Too many same or unrelated suggestions — I've filed an issue:
https://youtrack.jetbrains.com/issue/DBE-6036
You're welcome to watch/vote for it.

When you encounter a new problem in DataGrip, you're welcome to file an issue (or, even better, vote for/comment on an existing one, if found).

Also, DataGrip is a relatively new product; it was released by JetBrains just a couple of years ago. We are working on bringing more features and bug fixes in the next releases, so please stay tuned!


Hi Leonid,

Thank you for reply.

> As for "knows nothing about standard Oracle packages like dbms_application_info" — please clarify what you mean.

1. Create new project

2. Create new data source for casual Oracle schema (not SYS)

3. Wait for DataGrip to introspect the schema. I expect all necessary metadata, including roles, privileges, etc., to be calculated and processed. If the DBA_... views are available, they should be used; otherwise the ALL_... views should be.

If a function, procedure, package, view, table, or whatever can be accessed by the schema via a direct grant or a cascade of roles, I expect it to be available for suggestions. For example: SELECT on a table was granted to role A, role A was granted to role B, and so on (C, D, ...). The schema was granted role D, and since it received the SELECT privilege on the table via the chain of roles, I expect to see the table as a suggestion.
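The grant chain described above can be sketched like this (all names are illustrative):

```sql
-- SELECT granted to role_a, roles chained a -> b -> c -> d,
-- and role_d finally granted to the developer schema.
grant select on app_owner.some_table to role_a;
grant role_a to role_b;
grant role_b to role_c;
grant role_c to role_d;
grant role_d to app_dev;
-- app_dev now reaches app_owner.some_table through the role chain,
-- so completion should offer the table when connected as app_dev.
```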

4. Since the DBMS_APPLICATION_INFO package is available for execution by the schema's source code, I expect it to be recognized in suggestions. Please check the attached screenshots: the first is from the console, the second from a package body. DataGrip does not know these standard packages, though they are available for execution by the schema.
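For context, a minimal call that an ordinary schema can normally execute (SET_MODULE and SET_CLIENT_INFO are the real procedures; the values are illustrative):

```sql
begin
  dbms_application_info.set_module(module_name => 'nightly_load',
                                   action_name => 'stage_rows');
  dbms_application_info.set_client_info('batch 42');
end;
/
```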

I hope this explains what I mean. And yes, if one of the available data sources is the SYS schema where the mentioned packages live, they will be suggested. But the SYS schema is not available to developers in most scenarios.

---

FYI

It took approximately 5 minutes to perform a complete schema introspection from scratch (I didn't track it with a timer). The schema is nothing special (609 tables, 15 views, 201 routines — why do you keep functions and procedures together? — 95 packages, 84 sequences, 329 object types, 151 collection types — oh, now you do separate types into different subtypes :) ). I'd call the project medium-sized, so 5 minutes to fetch metadata is slow. I hope you don't read the source code in full, because that's useless. If I do nothing with a package, there is no sense in reading it. If I change it, I need to either re-read it to be sure I have the latest version, or use the repository as the golden source. If I need to check the actual package source code, I expect DataGrip to read it from the database, not from metadata collected a week ago.

---

> When you're encounter a new problem in DataGrip you're welcome to file an issue (or even better to vote for/comment an existent one, if found).

Thank you for this opportunity. Unfortunately, I have had a bad experience with it. It took more than a year for one task to go live: it was created on 26 May 2016 and ... fixed in 2017.3. The task was about a basic thing needed by anyone who analyzes data in a database: distinguishing text from numbers in a result set at first glance. For example, if a column is text but contains numbers, it may lead to poor SQL query performance or issues related to implicit data type conversion.

 



Best regards,

Kostyantyn

 

DataGrip 2017.3.7
Build #DB-173.4674.7, built on February 27, 2018
Licensed to Kostyantyn Krakovych
Subscription is active until September 24, 2018
JRE: 1.8.0_152-release-1024-b15 x86_64
JVM: OpenJDK 64-Bit Server VM by JetBrains s.r.o
Mac OS X 10.13.3

Permanently deleted user

I tried using Oracle with DataGrip, but I am very disappointed.


Hello, Kostyantyn

Thank you for the explanation. To get suggestions for SYS packages working, just add the SYS and PUBLIC schemas and sync them. When introspection is done, suggestions will work.

As for introspection time — you can make it faster by granting 'select any dictionary' to the Oracle user you're logging in with. Unfortunately, the catalog with the ALL_ views is really very slow; we cannot fix or work around it.

As for users and roles — DG doesn't work with admin entities yet; it's planned for future versions.


Hi Leonid,

An example from real life. A project has a database with 15 different schemas containing source code (proxy schemas not counted). And 10 environments are in use for development, testing, and so on. With the current interface, the user is forced to perform additional actions 150 times just to make the code base look valid, though it already is. And ... some projects use another strategy: all environments live in the same database, with schemas carrying a numeric suffix like APP_01, APP_02, or an ending like APP_DEV. In that case I can't create data sources with the copy command, because different environments need different settings.

The 'select any dictionary' system privilege is dangerous and cannot be granted, simply because of existing policies.

As for slow metadata retrieval: perhaps it makes sense to analyze less, or to use the source code in a repository on the local computer? At least give the option. If an Oracle project is not a home toy, the source code will be in a repository.

Best regards,

Kostyantyn

 


Hello Kostyantyn,

As for 'select any dictionary' — it's not really dangerous; it doesn't allow damaging anything because it's a read-only privilege. It just allows reading the system catalog, which is faster than the user's one.

We'll discuss how we can reduce the number of analysed entities (to reduce introspection time), but that disables some DataGrip features (code validation, navigation, script generation).

Which tool are you using for PL/SQL development? Which tool would you suggest?

P.S. The following phrase is not clear to me:
> In the case I will not be able to create data sources with copy command because different settings for different environments.
Could you please clarify it?


Hi Leonid,

1. On the privilege. One of Oracle's legendary gaffes, in version 9i, displayed cleartext database-link passwords in SYS.LINK$. In version 11g you can see password hashes in SYS.USER$. Since version 12c, Oracle has locked this type of access down even further. The privilege allows uncontrolled access to Oracle's data dictionary; thus 'select any dictionary' is dangerous. Nobody grants the privilege to a developer if security matters. The well-known 'least privilege' and 'need to know' policies are still in force.

2. On reducing the number of analysed entities. It's up to you, though, as I said before, please give the source code in a local repository a chance. For environments like QAT/UAT we have automated deploys only, so full schema introspection is required after each deploy.

3. On tools. For Oracle database development I use Allround Automations PL/SQL Developer. I use it in single-connection mode to avert accidentally compiling an object or executing a script on the wrong database. It has smart tabs, source code formatting (far from ideal, but I haven't seen better yet), source code navigation, autocomplete, a very pleasant hierarchical connection menu, a debugger (needed about once a year, but ...), schema diffs, and so on. Colleagues, in general, use the same, though some prefer Toad for Oracle, and some use PL/SQL Developer in multi-connection mode. In my view, Toad works better for administration tasks but has a problem: it's buggy. Oracle SQL Developer is also slowly getting better and better, and it can extract huge data volumes into CSV/Excel without trying to read the full result set into memory like the others.

4. On "The following phrase is not clear for me" — I meant the Duplicate (Cmd+D) command for a data source. If I use it, the new data source connection has the same set of enabled schemas. So if the DEV environment's schema names end in _DEV and TEST's end in _TEST, I can't duplicate a data source with minimal changes. I'm forced to perform a full setup for at least one data source per environment.

 

Best regards,

Kostyantyn


Have a look at my issues, guys. Not only does DataGrip have problems saving changes (all changes lost, not applied, no one knows why), but it also thinks I'm creating duplicate columns when I'm not. Great work, guys :-(


@Dave Thompson what issues are you talking about?


I know it's an old thread, but I completely agree with the topic starter. Even now, with the 2020.2 version, I can barely rely on DG when we're talking about production. For instance, a recent issue I faced: I update a routine's source code formatting, press Submit to DB, and IDEA (which I assume is based on the same DG engine) removes the routine! Now imagine that happening in production. Further details: DBE-11166. Or: I press "Submit a change" to push a routine's code, IDEA updates the code, but the routine's tab remains highlighted (in IDEA that means the file contains a change that hasn't been pushed to a repository/Git yet).

Along with other issues and limitations I've faced during testing, such as not being able to create a routine in DG, no table/routine version control, and no local history or Git-like VCS, I have to admit that, at the moment, I would not take the risk of connecting DG to a production DB. Unfortunately, I have to stay with MySQL Workbench.

I would feel more confident if DG offered the same reliability and code/DB version control as IDEA offers for Java/JS.


It seems the problem with resolving some fields in packages mentioned in 3.4 is still not completely solved.

For example:

- I have declared a record type in a package specification with multiple fields.

- If I use the type for a parameter of a function or procedure, the fields of the record type are not resolved properly.
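A minimal sketch of the scenario (package, type, and procedure names are illustrative):

```sql
create or replace package demo_pkg as
  type t_person is record (
    id   number,
    name varchar2(100)
  );
  procedure greet(p_person in t_person);
end demo_pkg;
/
create or replace package body demo_pkg as
  procedure greet(p_person in t_person) is
  begin
    -- p_person.name is the kind of record-field reference that
    -- the editor reports as unresolved
    dbms_output.put_line('Hello, ' || p_person.name);
  end greet;
end demo_pkg;
/
```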

 

What are the planned features for DataGrip in the future?

It still seems like a very unfinished product in a very early development stage.


Mike B.,

We've fixed the issue you mentioned!
Thank you for your feedback!


Andre Zeiler,

Could you file an issue in the DataGrip bug tracker with some examples?


vasily chernov

Thank you for reaching out to me.

I have created a new issue (DBE-13188, "Resolving of PL/SQL record type fields does not work in most cases"). I hope the example is detailed enough.

For my part, I immediately stopped working with and trying out DataGrip after I saw that most of my code was highlighted and hundreds of "errors" were wrongly detected. I think the tool has huge potential, but at least the basics need to work properly. Maybe I'll give it another try in the future.

To be able to use it, I "only" need a code editor that can parse my code, highlight it properly, assist me during coding, allow debugging using at least DBMS_DEBUG, offer good automatic code formatting, provide a way to compile to my database(s), and match my source files between the database and my change tracker (Git, Subversion). If those things are possible and work properly, I can start using DataGrip, and then I'll also be able to contribute to its development by filing new issues and (nice to have) feature requests.


Andre Zeiler,

Thank you for your feedback.

>I "only" need a Code Editor that can parse my Code, highlight it properly, assist me during coding


We're continuously working on it. If you've got more highlighting issues, it would be great if you could share examples. You can reach me via e-mail: Vasily.Chernov@jetbrains.com

 

>allows debugging using at least DBMS_DEBUG
Have you tried our Oracle debugger?

 

>good automatic code formatting

We've got one. Are you missing something?


>a way to compile to my databse(s), matching of my source files in the database and on my change tracker (git, subversion).

I didn't quite get that; could you describe it?


Hey vasily chernov

Thank you for your reply.

My last comment was very general, not focused on DataGrip. I just listed the "features" I need to be able to work with an IDE/tool like DataGrip to develop code/features for our database.

The first thing I noticed after opening my project source files in DataGrip was that code parsing failed. Because of that, compiling and deploying the code to our database was not possible. This is a major issue, and I was not really able to investigate DataGrip and its features more deeply. I stopped there and opened a new issue in your bug tracker.

I am aware that some of the mentioned features (like the debugger) are already implemented in DataGrip, but because my code is not accepted by it, I was not able to test them yet. After the bug I reported is fixed, I will reopen my project and look at the tool again (if my trial licence is still valid), and I will open new issues if I find other problems.

> matching of my source files in the database and on my change tracker (git, subversion).

My source files are located in a Subversion repository. I can check them out to my local hard drive and open them in DataGrip. From there I can compile and deploy them to the connected database.

For my "workflow" it would be optimal to always open and work with the source files from the file system. I need to be able to modify them, then compile and deploy them, and I also want to debug from those files directly. The dedicated database view could be synced to the source files and should not open its own tab in the code editor. Maybe separate tabs could be forced for comparing files, but during normal development (coding) they could stay synced. I'm not sure how this currently works in DataGrip, but this is how I would like to see it implemented in an IDE.

> good automatic code formatting

I will try it after my code can be parsed :) 

In general I really would like to be able to use the IDE.


Hello Kostyantyn,

As for the formatting issue — I've checked that the formatter in the latest DataGrip version can shape the code into a form very similar to the one in your example:

The only difference is a space after commas in the comma-first mode. I hope this little difference doesn't become a show-stopper for using the DG formatter :)


For me, it is absolutely useless. I am using RDS/Postgres, and the issue is not about features: it is slower than pgAdmin, slower than DBeaver, slower than TablePlus, and so slow at performing anything that I have time to load another tool, log in, and perform the operation before DataGrip is done with introspection.

The core issue here is endless, ultra-slow introspection. I'm doing some work with a DB that is in Tokyo while I'm in Malta; the latency is generally really bad, but DataGrip takes this to a whole other level. It's not a bit slower, or 2x slower; it is impossible-to-use slow on a high-latency connection.
What I don't get is why none of the other tools have the same issue with introspection; they're all very similar in performance, and only DataGrip stays stuck like nothing is happening.

Today I needed to do a drop cascade on a schema. I started it, went to another floor to get a snack, came back, and it was stuck on some introspection. I loaded pgAdmin, logged in, and dropped the schema in 30 seconds. The main reason I still have DataGrip is that it has all the accounts set up already. As I move them one by one to another tool, I need it less and less.

