New AI Assistant - using our own endpoint


Is it possible, or will it be possible, to use our own endpoint for the AI Assistant? As far as I understand the documentation, one can currently only use the model hosted by JetBrains.

This would make a lot of things much easier for us.


Currently, only the default cloud LLM provider/model used by AI Assistant is available. For now, that is OpenAI (GPT-3.5 and GPT-4); however, we are evaluating and testing other providers/models, which will be added in the future.

As for configuring custom LLM providers: the Enterprise AI solution will cover this scenario. Enterprise AI is not yet available. Support for custom cloud LLM providers is planned for April, and support for custom local LLM providers is currently planned for the end of the year.


Thank you! That sounds great; I'm really looking forward to what you will publish in April!

