TEL AVIV, Israel, Aug. 11, 2025 (GLOBE NEWSWIRE) -- Tabnine, the originator of AI-powered software development, today announced support for the newly introduced NVIDIA Nemotron reasoning models, continuing to bring the best models to its customers while meeting enterprise needs for accuracy, efficiency, and control.
The new Nemotron reasoning models, which enable more intelligent, scalable, and cost-efficient enterprise applications, join Tabnine’s growing gallery of supported models. Nemotron offers a high-performance option for teams building intelligent software in secure, self-hosted, or hybrid environments.
Built with open, commercially viable datasets and optimized for the NVIDIA Blackwell architecture, the Nemotron family of models, including NVIDIA Llama Nemotron Super 1.5 and NVIDIA Nemotron Nano 2, enables enterprises to build robust agentic AI systems with powerful reasoning capabilities and efficient compute performance. Tabnine is integrating these models into its enterprise-grade AI development platform, giving customers access to high-performance reasoning models for building smarter AI agents along with greater control, performance, and efficiency across the software development lifecycle.
“Reasoning is the next frontier in developer productivity, and the NVIDIA Nemotron models help us cross that threshold,” said Dror Weiss, CEO and Co-founder of Tabnine. “By combining Tabnine’s secure, fine-tuned AI environment with Nemotron’s industry-leading performance, we’re enabling enterprises to build and deploy intelligent agents faster, without compromising privacy, control, or accuracy.”
Enterprise-grade models, ready for real-world scale
The Nemotron models are designed for high-throughput inference and seamless deployment as NVIDIA NIM microservice containers, giving Tabnine the flexibility to serve a broader range of enterprise customers, from cloud-native teams to air-gapped, on-prem environments. With support for up to 250 concurrent users on a single NVIDIA H100 GPU and faster token generation that lowers total cost of ownership, Nemotron dramatically improves time-to-value for enterprise AI investments.
“Enterprises are no longer just experimenting with AI agents. They’re deploying them across real workflows. Nemotron helps us meet that demand with reasoning capabilities that scale,” said Weiss. “Tabnine is excited to continue working closely with NVIDIA to bring the best-performing models to our customers, especially those operating in secure, self-hosted environments.”
Open models, built for control
True to Tabnine’s commitment to privacy-first, customizable AI, the integration with Nemotron complements the company’s platform approach: fully open model weights, transparent training data, and enterprise-grade deployment options. Tabnine customers can take advantage of the NVIDIA NeMo platform for building, deploying, and continuously improving agents, unlocking a complete lifecycle for secure AI adoption.
This collaboration continues Tabnine’s longstanding alignment with NVIDIA. Tabnine last year announced support for NVIDIA NIM, bringing containerized deployment to customers across cloud, hybrid, and secure infrastructure. Today’s announcement adds best-in-class reasoning to that foundation and underscores the companies’ shared commitment to enterprise-grade AI.
For more information about Tabnine’s enterprise AI platform, visit www.tabnine.com.
About Tabnine
Tabnine helps developers and enterprises accelerate and secure software development using generative AI. With over one million monthly users and deployments across thousands of organizations, Tabnine’s private, open, and secure AI coding assistant integrates seamlessly into every stage of the development lifecycle. Tabnine is trusted by leading engineering teams to increase velocity, improve code quality, and ensure full control over AI adoption.
Learn more at www.tabnine.com.
Contact
press@tabnine.com
