[UPDATE] Announcing Lampi V2.0

Today, we’re excited to announce that we are releasing Lampi v2.0, our biggest update so far, purpose-built for business use cases.

We’ve made numerous improvements thanks to your feedback.


By combining AI agents with fine-tuned, imbricated open-source models, Lampi v2.0 delivers a secure, modular, and multimodal platform. It runs the best and latest LLMs to power predictable, fine-tuned AI agents that interpret your needs and reliably carry tasks from start to finish.

What’s new? 

From chatbots to powerful AI agents 

Since the beginning, Lampi has offered an advanced enterprise search solution with exceptional performance for retrieval-augmented generation (RAG) applications, accessible through a conversational chatbot that lets any user retrieve specific information from enterprise data.

RAG leverages a vast corpus of text data to retrieve up-to-date, verifiable information relevant to a given input, then incorporates the retrieved information into the LLM's context, keeping its answers accurate. In enterprise applications, it has seen success on tasks such as question answering (QA), helping users surface specific insights. However, one of the main limits of RAG in production is that retrieval is performed only once per question, which is why plain RAG applications often fall short on complex, everyday queries.
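The retrieve-then-generate pattern described above can be sketched in a few lines. This is a minimal illustration, not Lampi's implementation: the word-overlap scoring is a toy stand-in for real semantic retrieval, and the corpus and function names are hypothetical.

```python
import re

def _tokens(text: str) -> set[str]:
    """Lowercase word tokens; a crude stand-in for an embedding model."""
    return set(re.findall(r"[a-z0-9%]+", text.lower()))

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query and keep the top k."""
    return sorted(
        corpus,
        key=lambda doc: len(_tokens(query) & _tokens(doc)),
        reverse=True,
    )[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Incorporate the retrieved passages into the LLM prompt as grounding context."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Q3 revenue grew 12% year over year.",
    "The office relocates to Lyon in June.",
]
passages = retrieve("What was revenue growth in Q3?", corpus)
print(build_prompt("What was revenue growth in Q3?", passages))
```

Note that retrieval here happens exactly once before generation, which is precisely the single-shot limitation that agentic systems lift.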

This is why we have developed powerful AI agents that can reason about a query, create an action plan, and execute it with the help of different tools or applications. Given a user request, our AI agents analyze the task and break it down into logical sequences using their reasoning capabilities, answering questions such as "Which tools should be used?" and "In what order should they be used?". They decide when (or whether) to conduct research using the selected sources ("Did I gather enough information before answering?"), formulate one or more search queries, review the results, and finally determine when to provide a relevant answer.
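The plan/act/review loop described above can be sketched as follows. Everything here is a toy stand-in: the two tools, the stopping rule in `enough`, and `synthesize` are hypothetical illustrations, not Lampi's agents.

```python
def search_internal(query: str) -> str:
    """Toy tool: pretend lookup in internal enterprise data."""
    return "internal: 2023 margin was 18%"

def search_web(query: str) -> str:
    """Toy tool: pretend lookup in an external source."""
    return "web: sector average margin is 15%"

def enough(findings: list[str]) -> bool:
    """Hypothetical stopping rule: stop once both sources were consulted."""
    return len(findings) >= 2

def synthesize(query: str, findings: list[str]) -> str:
    """Stand-in for the final LLM call that writes the answer."""
    return f"Answer to {query!r} based on: " + "; ".join(findings)

def run_agent(query: str, plan: list, max_steps: int = 4) -> str:
    findings = []
    for step in range(max_steps):
        tool = plan[step % len(plan)]   # which tool, and in what order
        findings.append(tool(query))    # act, then review the result
        if enough(findings):            # "Did I gather enough information?"
            break
    return synthesize(query, findings)

print(run_agent("How does our margin compare to the sector?",
                [search_internal, search_web]))
```

The key difference from plain RAG is the loop: the agent can retrieve repeatedly and only answers once its own review step says the gathered evidence is sufficient.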

Thanks to these abilities, built on our fine-tuning expertise, we now deliver a level of precision and accuracy beyond other RAG systems.


Pre-trained agents

Identifying use cases that can really infuse innovation, efficiency and ROI into an organization – the “golden” use cases – remains challenging for every industry. 

That is the purpose of our pre-trained AI agents: performing identified, everyday tasks that genuinely create value for enterprises across industries, starting with corporate finance and consulting.

Our agents are fine-tuned and trained to perform specific tasks in these professional services. They are experts at domain-specific tasks such as investment analysis, market analysis, and profitability analysis, and provide highly detailed, accurate answers based on internal and external data.


Enhanced search capabilities 

Our enterprise search system has been improved and now offers exceptional performance, combining semantic search with advanced retrieval techniques.
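At its core, semantic search ranks documents by the similarity of their embedding vectors rather than by keyword match. A toy illustration, with hand-made vectors standing in for a real embedding model:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-dimensional "embeddings"; real ones have hundreds of dimensions.
docs = {
    "quarterly revenue report": [0.9, 0.1, 0.0],
    "office party invitation": [0.0, 0.2, 0.9],
}
query_vec = [0.8, 0.2, 0.1]  # pretend embedding of the query "revenue growth"

best = max(docs, key=lambda name: cosine(query_vec, docs[name]))
print(best)  # → quarterly revenue report
```

Because similarity is computed in embedding space, a query about "revenue growth" can match a document that never uses those exact words, which is what distinguishes semantic search from keyword search.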

We continue to improve it to deliver one of the most advanced solutions available.

Lampi can now process and analyze massive volumes of structured and unstructured data while respecting enterprise data permissions. It can also be augmented with external data from the internet, including financial APIs we are currently integrating, making Lampi a complete platform for professionals.


Imbrication of fine-tuned models

Lampi v2.0 provides a dynamic arsenal of meticulously tested and augmented state-of-the-art open-source models, each implemented for a specific feature and fitted to your exact needs.

This flexibility ensures we can always adopt the latest and best LLMs to maximize output quality over time, and deploy the most relevant model for each task.


Voice-to-text and voice-to-voice interaction 

You can now speak to Lampi as you would with any colleague. Lampi recognizes your voice and transcribes it to interact with LLMs and AI agents.


Enhanced meeting AI assistant

Our AI meeting assistant provides transcripts of your online and offline meetings. With Lampi v2.0, you can now chat with your transcripts to retrieve specific information from your meetings, and integrate them into your AI workflows. Imagine simply asking our AI agent to craft a service proposal based on your meeting notes and the documents you received for a specific task.


This release also brings:

  • Better UI/UX
  • New available models
  • New connectors
  • and more.

Lampi maintains a core focus on: 

  • confidentiality and data security for our AI technology – it can be accessed through a SaaS model, on-cloud infrastructure, or private deployments (VPC and on-premise); and 
  • personalization: vanilla LLMs can generate answers on a broad range of topics, but they often need some level of customization that augments the model with in-depth industry knowledge, producing more accurate and relevant responses. We provide the expertise to leverage and fine-tune state-of-the-art LLMs with your enterprise or domain knowledge, enabling specialized business solutions.

We look forward to helping more enterprises take AI from proof-of-concept projects to production-ready deployment.

We hope this update enhances your experience and helps you achieve more.

More to come soon...


Ready to transform how you work with AI? Contact us to request a demo.


Don't forget to follow us on LinkedIn, Twitter, and Instagram!