Why is the future about AI agentic workflows?

Discover more about how AI agentic workflows will set a new standard for performance, accuracy, and productivity in enterprises.


AI in Enterprise: from RAG to AI Agents

Enterprises have started to adopt AI and GenAI to improve business workflows, thanks to these models' ability to understand and use natural language across a wide variety of activities and tasks. Yet implementing AI in production comes with many challenges and concerns, particularly regarding the accuracy and reliability of AI outputs.

Because large language models (LLMs) rely solely on their parametric memory and can generate factually incorrect answers, "Retrieval Augmented Generation" (RAG) has come into the spotlight as a promising approach to improving the contextual accuracy of responses.

RAG is designed to leverage a vast corpus of text data: it retrieves up-to-date, verifiable information relevant to a given input and then incorporates that information into the LLM's prompt, helping the model stay accurate. In enterprise applications, it has gained some success on several tasks, such as Question-Answering (QA), helping users search for specific insights.

Consider an AI application designed to help financial analysts answer questions about a company. With a well-designed RAG pipeline, analysts can answer questions like:

🗨️
“What is the total number of employees for FY2023?”

This information can be easily extracted from financial statements or reports and given to a model to generate a contextual answer.

Retrieval Augmented Generation (RAG)

One of the main limitations of RAG in production is that retrieval is performed only once per question: the model is prompted to generate a final output, token by token, in a single zero-shot pass. (A minimal sketch of this single-pass pattern follows the list below.)

As such:

  • The context from retrieval is fixed once provided and cannot be expanded;
  • If the retrieval result is irrelevant, the LLM has no feedback loop to correct course and may resort to hallucinating an answer;
  • The LLM does not take into account that the retrieval results might suggest further investigation.
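To make this limitation concrete, here is a minimal sketch of a single-pass RAG pipeline in Python. The `retrieve` and `generate` functions are hypothetical stand-ins (a naive keyword overlap and a placeholder for an LLM call), not any particular library's API; the point to notice is that retrieval happens exactly once and nothing feeds back into it.

```python
# Minimal sketch of a single-pass RAG pipeline (illustrative only).
# `retrieve` and `generate` stand in for a vector-store lookup and an LLM call.

def retrieve(question: str, corpus: dict[str, str], top_k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval over an in-memory corpus."""
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(set(question.lower().split()) & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_k]]

def generate(question: str, context: list[str]) -> str:
    """Placeholder for a single LLM call; a real system would prompt an LLM here."""
    return f"Answer to '{question}' based on: {' | '.join(context)}"

# Placeholder documents standing in for a real document store.
corpus = {
    "report_2023": "Total number of employees for FY2023: 4,200.",
    "press_release": "The company opened two new offices in 2023.",
}

question = "What is the total number of employees for FY2023?"
context = retrieve(question, corpus)   # retrieval happens exactly once
answer = generate(question, context)   # no feedback loop, no second retrieval
print(answer)
```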

Yet, some daily tasks require multi-step research, involving iteration and verification.

Consider now a query like:

🗨️
“What is the evolution of the total number of employees of the company over the last five financial years? Can you provide a graphical representation of the trend?”

This is the type of question a financial analyst would want answered. While prompt engineering can help, using an LLM alone is unlikely to be enough to answer this more complex query (unless the answer already appears verbatim in the documents, which is unlikely), as the model lacks the immediate knowledge required.

This query might require a multi-step approach, breaking down the task into simpler sub-parts (e.g., identify the relevant number of employees each year, estimate the trends, generate a graphical representation, etc.).
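As a rough illustration, here is what such a decomposition could look like. The plan and the yearly figures below are purely illustrative placeholders; in an agentic system, an LLM would generate the plan dynamically and the figures would be retrieved from the company's documents.

```python
# Illustrative decomposition of the analyst's query into sub-steps.
plan = [
    "Retrieve the total number of employees for each of the last five financial years",
    "Assemble the figures into a time series",
    "Compute the year-over-year trend",
    "Generate a graphical representation of the trend",
]

# Placeholder data standing in for figures an agent would retrieve.
employees_by_year = {"FY2019": 3100, "FY2020": 3300, "FY2021": 3600, "FY2022": 3900, "FY2023": 4200}

for step in plan:
    print("-", step)

years = list(employees_by_year)
growth = {
    years[i]: employees_by_year[years[i]] - employees_by_year[years[i - 1]]
    for i in range(1, len(years))
}
print("Year-over-year change:", growth)
```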

This is where a more human-like approach is needed: performing multiple searches or steps before providing a final answer, such as gathering useful information, determining whether further investigation is required, running the follow-up searches, assessing the results, and iterating.

The solution: AI Agents.


Understanding the concept of AI agents

While there isn’t a widely accepted definition, there have been attempts to describe the framework of AI agents. Broadly, an AI agent can be described as a system with reasoning capabilities, memory, and the tools necessary to execute tasks.

The fundamentals of AI agents are not completely new: they build on the concept of structured prompt chaining, which generally requires handcrafted rules. Modern AI agents, however, innovate in how they interact with a user through a system-defined persona, use an LLM to reason about a specific query, and create a dynamic plan that they execute with the help of different tools or applications.

Given a user request, the AI agent comes up with a plan to solve the problem by answering questions such as "Which tools should be used?" and "In what order should they be used?". It determines whether it needs to conduct research with the selected tools, formulates one or multiple search queries, reviews the results or seeks clarification, and decides when it can provide a relevant answer to the query.
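A minimal sketch of this decide-act-review loop, assuming hypothetical tools and placeholder decision rules (in a real agent, an LLM would choose the tool, formulate the query, and judge whether the answer is sufficient):

```python
# Minimal agent loop: decide which tool to use, run it, review the result,
# and stop once the answer is good enough. All tools and decision rules
# below are hypothetical placeholders.

def search_documents(query: str) -> str:
    return f"[document snippets matching '{query}']"

def run_code(snippet: str) -> str:
    return f"[result of executing: {snippet}]"

TOOLS = {
    "search_documents": search_documents,
    "run_code": run_code,
}

def choose_action(task: str, history: list[str]) -> tuple[str, str]:
    """Stand-in for the LLM's reasoning step: pick a tool and its input."""
    if not history:
        return "search_documents", task
    return "run_code", "plot(employee_counts)"

def is_answer_sufficient(history: list[str]) -> bool:
    """Stand-in for the review/reflection step."""
    return len(history) >= 2

def run_agent(task: str, max_steps: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        tool_name, tool_input = choose_action(task, history)
        observation = TOOLS[tool_name](tool_input)
        history.append(f"{tool_name}({tool_input!r}) -> {observation}")
        if is_answer_sufficient(history):
            break
    return history

for step in run_agent("Evolution of employee count over the last five financial years"):
    print(step)
```

Swapping the placeholder `choose_action` and `is_answer_sufficient` functions for LLM calls is what would turn this fixed loop into a genuinely agentic workflow.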

AI Agentic workflows

The main idea of AI agents is to use LLMs to choose a sequence of actions to answer a specific query.

AI agents go beyond the traditional implementation of RAG systems, where each query is handled in a single, isolated pass.


Components of an AI agent architecture

The reasoning skills of AI agents come from several components (a minimal skeleton wiring them together is sketched after the list below):

Core components of AI agents
  • Core agent: the control component that defines the core logic and behavioral characteristics of an AI agent. It is generally profiled, or assigned a persona, through various methods.
  • Planning: complex problems often require a nuanced approach. This complexity can be managed through task and question decomposition, where the agent breaks a complex task down into smaller, manageable subtasks as part of a multi-step plan to achieve a goal.
  • Tools (or Actions): workflows that agents can use to execute tasks. For instance, agents can use a RAG pipeline to generate context-aware answers, an API to search for information on the internet, a code interpreter to solve tasks programmatically, etc.
  • Reflection: the agent performs what is generally called “self-criticism” or “self-reflection” over its previous actions, learns from its mistakes, and refines its approach when necessary to improve the quality of the output.
  • Memory: the memory module emulates human memory processes and enables agents to make more consistent, reasonable, and effective decisions. There are different types of memory modules, notably short-term and long-term memory: short-term memory lets the model remember details from previous steps, helping to maintain coherence and context in the outputs, while long-term memory gives the agent the ability to retain and recall information over extended periods.
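A minimal skeleton wiring these components together might look as follows. Every name and method body here is a placeholder chosen for illustration, not a reference implementation; a real agent would delegate planning, execution, and reflection to an LLM and to actual tools.

```python
# Skeleton wiring the components above: a core agent with a persona,
# a planning step, a tool registry, memory, and a reflection pass.

class MinimalAgent:
    def __init__(self, persona: str, tools: dict):
        self.persona = persona                   # core agent: assigned persona / behavior
        self.tools = tools                       # tools the agent may call
        self.short_term_memory: list[str] = []   # details from previous steps
        self.long_term_memory: list[str] = []    # information retained across sessions

    def plan(self, task: str) -> list[str]:
        # Planning: decompose the task into subtasks (placeholder plan).
        return [f"research: {task}", f"summarize findings for: {task}"]

    def act(self, subtask: str) -> str:
        # Tools/actions: pick a tool by simple prefix matching (placeholder).
        tool_name = "search" if subtask.startswith("research") else "summarize"
        result = self.tools[tool_name](subtask)
        self.short_term_memory.append(result)
        return result

    def reflect(self, result: str) -> str:
        # Reflection: self-critique and refine (placeholder check).
        return result if "error" not in result.lower() else "retry needed"

    def run(self, task: str) -> str:
        outputs = [self.reflect(self.act(step)) for step in self.plan(task)]
        self.long_term_memory.append(f"{task}: done")
        return outputs[-1]

agent = MinimalAgent(
    persona="financial research assistant",
    tools={
        "search": lambda s: f"[search results for '{s}']",
        "summarize": lambda s: f"[summary of '{s}']",
    },
)
print(agent.run("employee headcount trend over five years"))
```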

These are the possible components of a hypothetical AI agent, but there are other important considerations and a multitude of ongoing developments, notably around multi-agent collaboration, where several AI agents work together under different roles (product manager, designer, customer service, etc.), split up tasks, and discuss in order to come up with better solutions than a single agent would (even if the outputs can be less predictable in such a setup).
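As a toy illustration of this collaboration pattern (the roles, messages, and turn-taking below are purely made up):

```python
# Two agents with different roles exchange messages on a shared task.
def analyst(message: str) -> str:
    return f"Analyst: based on '{message}', headcount grew every year."

def reviewer(message: str) -> str:
    return f"Reviewer: checked '{message}', please add the FY2023 figure."

task = "Summarize the five-year headcount trend"
conversation = [task]
for agent in (analyst, reviewer, analyst):
    conversation.append(agent(conversation[-1]))

for turn in conversation:
    print(turn)
```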


Challenges

While AI agents have been recognized as intelligent entities capable of accomplishing specific tasks, the field is still at an early stage, and several significant challenges need to be addressed in their development, some similar to those of “traditional” AI applications, others specific to agents.

Among these challenges, we can mention:

  • Prompt robustness and reliability: AI agents rely on an entire prompt framework that can encounter reliability issues, as well as hallucination, since agents interact with external components that may introduce conflicting information.
  • Efficiency of actions: the number of requests an agent makes can grow quickly, which calls for solutions with predictable costs.
  • Context length: restricted context capacity can limit mechanisms such as self-reflection.
  • Long-term planning and task decomposition: agents may struggle to define and adjust plans, notably compared to humans.

Finally, it is important to remember that AI agents are only as good as the tools and applications they can easily access.


Conclusion

In real-world scenarios, tasks and workflows are generally characterized by complexity and variability, so addressing complex tasks through a one-step planning process might not be enough.

Agent designs are becoming more and more refined and production-ready, as proprietary and open-source LLMs are now starting to reach a performance level that makes them suitable for powering agentic workflows capable of making decisions in complex, real-world scenarios.

The introduction of agentic workflows in enterprises sets a new standard for performance, accuracy, and productivity.


At Lampi, we provide a secure AI-powered platform with the best and latest LLMs to power predictable, fine-tuned AI agents that pick the relevant information from your data and the web, reason, iterate, and tackle complex tasks.

Ready to transform how you work with AI? Contact us to request a demo.


Don't forget to follow us on LinkedIn, Twitter, and Instagram!