Next Gem AI's Custom LLM
By customizing a Large Language Model (LLM), we aim to refine our initial output and enhance the value returned to our users. We store all the information processed by various AI systems in our MongoDB database.
This allows us to create a specialized version of the LLM that is well informed about the collective knowledge each AI has generated on specific projects. Users can therefore access detailed insights through this customized LLM, tailored to provide precise and comprehensive project analyses. The scores from these different models are cross-referenced, and our custom model assigns its own score derived from their overall average. All of this data enables us to establish a Trustability factor, allowing you to invest in any project with the best data available on the market.
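As a rough illustration of the averaging step, the sketch below computes such an overall score from per-model scores stored in MongoDB. The connection string, database and collection names, and score schema are all hypothetical placeholders, not a description of our actual setup.

```python
# Minimal sketch of score aggregation, assuming each AI system records a
# numeric score per project in a hypothetical "scores" collection in MongoDB.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
scores = client["nextgem"]["scores"]               # hypothetical db/collection names

def trustability(project_id: str) -> float:
    """Average the scores recorded by the different AI models for one project."""
    docs = list(scores.find({"project_id": project_id}, {"score": 1}))
    if not docs:
        raise ValueError(f"no scores recorded for project {project_id}")
    return sum(d["score"] for d in docs) / len(docs)
```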
We do this by breaking down each document into smaller text segments, known as chunks, and then converting these chunks into embeddings. Embeddings are numerical representations of text that capture the essence of the information and can be processed by machine learning models. These embeddings are stored in a vector database to facilitate efficient searching and retrieval of similar embeddings.
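A minimal sketch of this chunk-and-embed step, assuming the open-source sentence-transformers library; the model name and the fixed character-based chunk size are illustrative choices, not our production pipeline.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative embedding model

def chunk_text(text: str, chunk_size: int = 500) -> list[str]:
    """Split a document into fixed-size character chunks."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

document = "..."  # full text of a project report
chunks = chunk_text(document)
embeddings = model.encode(chunks)  # one vector per chunk, ready for the vector store
```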
When a user asks a question, the system embeds it to capture its context and content, then searches the vector database for chunks of text whose embeddings are similar to the question's embedding. These relevant chunks contain information that can help answer the question. Finally, the system sends both the original question and the retrieved chunks as context to a language model, which uses this information to generate a coherent and contextually relevant answer. This process yields a more informed and precise response by providing the language model with a tailored context extracted from our MongoDB database of text embeddings.
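To make the flow concrete, the sketch below ranks chunks by cosine similarity in memory using the same illustrative model as above; in production the lookup would run against the vector database, and the assembled prompt would be sent to the language model. The sample question and chunk texts are stand-ins.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")            # same illustrative model as above
chunks = ["chunk one...", "chunk two...", "chunk three..."]  # stand-ins for stored chunks
embeddings = model.encode(chunks)                          # in production, read from the vector store

def top_k_chunks(question: str, k: int = 2) -> list[str]:
    """Return the k chunks whose embeddings are most similar to the question's."""
    q = model.encode([question])[0]
    # Cosine similarity between the question vector and every chunk vector.
    sims = embeddings @ q / (np.linalg.norm(embeddings, axis=1) * np.linalg.norm(q))
    return [chunks[i] for i in np.argsort(sims)[::-1][:k]]

question = "What are the main risks flagged for this project?"
context = "\n\n".join(top_k_chunks(question))
prompt = (
    "Answer using only the context below.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}"
)
# `prompt` is then sent to the language model, which generates the final answer.
```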