Giga ML, a company dedicated to facilitating on-prem deployment of Large Language Models (LLMs) for enterprises, has secured $3.6 million in seed funding.
The round was led by Nexus Venture Partners, with participation from Y Combinator, Liquid 2 Ventures, 8vdx, and prominent angel investors, including Garry Tan, President and CEO of Y Combinator.
As enterprises increasingly seek to deploy AI and LLM-driven solutions for a wide array of internal and external applications, they face critical challenges related to data security and compliance. Open-source models have emerged as strong contenders for enterprise adoption.
Giga ML offers a solution for enterprises to customize and deploy LLMs, allowing businesses to run models as robust as GPT-4 directly on their own servers. This eliminates the need to transmit sensitive information to external services, such as those operated by OpenAI, while offering strong customization and efficiency.
Giga ML enables enterprises to take its base model and further refine it through pre-training and fine-tuning to meet their specific requirements. The platform also boasts its own inference optimization algorithms, ensuring superior performance and substantial cost savings.
Giga ML’s X1 Large 32k model is a pre-trained and fine-tuned iteration of the Llama 2 70B 4k model, designed for enhanced performance and capabilities. Notably, Giga ML has fully fine-tuned Llama 2 with a 32k context length, a feat that well-funded competitors like Mosaic ML and Together AI have yet to accomplish.
Since its launch, Giga ML has seen overwhelming interest from a diverse range of enterprises, leading to the initiation of a waitlist. Most notably, healthcare, legal, and finance organizations are eager to leverage Giga ML’s offerings. Just 15 days after its public launch, the company’s website has already risen to become the fourth most-visited among startups in Y Combinator’s S23 batch.
As of today, enterprises are adopting Giga ML’s platform for use cases such as customer support, internal knowledge search, and code generation to improve the productivity of engineering teams. The company anticipates that these use cases will widen, with LLMs adopted and implemented for almost every function within an organization in the coming months.