Rambase NLP Agent

Hatteland NLP Agent for Rambase Enterprise Applications

The Rambase NLP Agent is an AI-powered tool that transforms workflow automation through natural language commands. It empowers end-users to execute complex workflows with simple natural language instructions: after training on the API specification file, the tool integrates seamlessly and becomes an asset for optimizing productivity and efficiency.

The Client - Hatteland

RamBase is a cloud-based Enterprise Resource Planning (ERP) solution designed to empower businesses in the manufacturing and distribution sectors, enabling them to oversee their entire value chain, from sales and production to delivery. Founded in 1992, RamBase is a Norwegian-developed Software-as-a-Service (SaaS) system: it was conceived in the cloud and is delivered through certified partners with extensive industry expertise. RamBase offers a comprehensive business solution, providing a cohesive, integrated system that harmonizes, streamlines, and simplifies all core operational processes.

Objectives

The project's primary objectives are to enhance user productivity, improve the user experience, and shorten the onboarding process for new employees. The main goal is to increase user productivity by simplifying applications and eliminating unnecessary complexity, enabling users to complete tasks more efficiently. The project also aims to make interactions with the tool more intuitive and user-friendly, and to minimize onboarding time for new employees through the Rambase NLP Agent, ensuring a swift transition to full productivity within the organization.

Intent Identification

The intent identification component discerns the user's intent behind a given natural language command and selects the most suitable endpoint from the available options. It does this with a vector similarity-based approach, ensuring that the identified endpoint aligns with the user's request.
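
As an illustration, here is a minimal sketch of vector-similarity intent identification in Python, assuming a sentence-embedding model and an invented endpoint catalogue; the descriptions, endpoint names, and model choice below are illustrative, not part of the RamBase API:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical endpoint catalogue: description -> endpoint identifier.
ENDPOINTS = {
    "Create a new sales order for a customer": "POST /sales/orders",
    "List all open purchase orders": "GET /procurement/orders",
    "Register a goods reception in the warehouse": "POST /logistics/goods-receptions",
}

descriptions = list(ENDPOINTS.keys())
endpoint_vectors = model.encode(descriptions, normalize_embeddings=True)

def identify_intent(command: str) -> str:
    """Return the endpoint whose description is most similar to the command."""
    query = model.encode([command], normalize_embeddings=True)
    # With normalized vectors, the dot product equals cosine similarity.
    scores = endpoint_vectors @ query[0]
    return ENDPOINTS[descriptions[int(np.argmax(scores))]]

print(identify_intent("make a sales order for Acme Corp"))
# -> "POST /sales/orders"
```

In practice the endpoint embeddings would be computed once offline and only the incoming command embedded per request.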

Entity Extraction

The entity extraction module retrieves the schema associated with the selected endpoint and populates it with data extracted from the natural language command. This step ensures that user instructions are interpreted precisely and tasks are executed correctly.
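
A minimal sketch of how such schema-guided extraction could look, assuming an OpenAI-style chat API; the schema fields, model name, and prompt are illustrative, not the project's actual configuration:

```python
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical schema for a sales-order endpoint.
SCHEMA = {
    "CustomerName": "string",
    "Quantity": "integer",
    "ProductCode": "string",
}

def extract_entities(command: str) -> dict:
    """Ask the model to fill the endpoint schema from the command."""
    prompt = (
        "Fill this JSON schema using only values found in the command. "
        f"Schema: {json.dumps(SCHEMA)}\nCommand: {command}\n"
        "Return JSON only; use null for fields the command does not mention."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

print(extract_entities("Order 12 units of product AB-100 for Acme Corp"))
```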

Fine-tuning

The fine-tuning module improves the accuracy of entity extraction. It combines few-shot learning techniques, endpoint-specific rules, and lookup modules for related data, so that the system consistently interprets natural language commands with high precision.
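
One way such few-shot refinement can be wired up is sketched below: each endpoint carries its own worked examples and formatting rules, which are prepended to the extraction prompt. All examples and rules here are invented for illustration:

```python
import json

# Hypothetical per-endpoint demonstrations: command -> expected JSON.
FEW_SHOT_EXAMPLES = {
    "POST /sales/orders": [
        ("Order 5 units of XY-200 for Nordic AS",
         {"CustomerName": "Nordic AS", "Quantity": 5, "ProductCode": "XY-200"}),
        ("Acme needs two hundred AB-100",
         {"CustomerName": "Acme", "Quantity": 200, "ProductCode": "AB-100"}),
    ],
}

# Hypothetical endpoint-specific rules appended to the prompt.
ENDPOINT_RULES = {
    "POST /sales/orders": "Quantities must be integers; product codes are upper-case.",
}

def build_extraction_prompt(endpoint: str, schema: dict, command: str) -> str:
    """Assemble a few-shot prompt for one endpoint."""
    lines = [
        f"Fill this schema: {json.dumps(schema)}",
        f"Rules: {ENDPOINT_RULES.get(endpoint, 'none')}",
    ]
    for cmd, expected in FEW_SHOT_EXAMPLES.get(endpoint, []):
        lines.append(f"Command: {cmd}\nJSON: {json.dumps(expected)}")
    lines.append(f"Command: {command}\nJSON:")
    return "\n\n".join(lines)
```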

Execution

In the execution module, the selected endpoint is invoked with the data obtained from the entity extraction module. This component is responsible for handling the authorization required for the API endpoint execution, ensuring that operations are carried out securely and in accordance with access control policies. It is the final step in transforming natural language commands into actionable results within the Rambase system.
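
A minimal sketch of this step, assuming bearer-token authorization; the base URL, token handling, and endpoint format are illustrative:

```python
import requests

def execute(endpoint: str, payload: dict, token: str) -> dict:
    """Call the selected endpoint with the extracted payload."""
    method, path = endpoint.split(" ", 1)  # e.g. "POST /sales/orders"
    response = requests.request(
        method,
        f"https://api.example.com{path}",  # hypothetical base URL
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    response.raise_for_status()  # surface authorization or validation errors
    return response.json()
```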

Challenges and Solutions

Handling a Large Number of Endpoints

RamBase's library comprises more than 1,200 API endpoints, which makes identifying the most appropriate endpoint for a given natural language command a complex problem. To tackle it, a context-integration strategy was implemented: taking the context of the command into account considerably improved the accuracy of intent identification, strengthening the system's ability to choose the correct endpoint.
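
One plausible shape for such context integration is sketched below; the module tagging and helper names are assumptions, not the team's exact method:

```python
def candidate_endpoints(endpoints: dict[str, str], module: str) -> dict[str, str]:
    """Pre-filter the 1,200+ endpoints down to the user's active module."""
    return {desc: ep for desc, ep in endpoints.items() if f"/{module}/" in ep}

def contextual_query(command: str, history: list[str]) -> str:
    """Fold the last few commands into the text embedded for intent search."""
    recent = " | ".join(history[-3:]) if history else "none"
    return f"Previous commands: {recent}. Current command: {command}"
```

The pre-filtered candidates and the context-enriched query can then feed the similarity search shown in the intent identification sketch above.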

Dealing with Different Endpoint Schema Formats

The API schema objects use different formats for different fields, so data from natural language commands must be converted into the appropriate format. The team addressed this with few-shot learning, enabling the system to adapt dynamically to varying schema formats and keep the data compatible.
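
For instance, format-normalization demonstrations can be embedded as few-shot examples; the formats below (ISO dates, canonical product codes) are illustrative and not taken from the actual RamBase schemas:

```python
# Illustrative demonstrations pairing raw command phrases with the
# format a field's schema expects.
FORMAT_EXAMPLES = [
    ("the 5th of March 2024", "2024-03-05"),  # free-form date -> ISO 8601
    ("twelve and a half",     12.5),          # spelled-out number -> float
    ("product ab-100",        "AB-100"),      # product code -> canonical casing
]

def format_shots() -> str:
    """Render the demonstrations as lines of prompt text."""
    return "\n".join(f"Input: {raw} -> Output: {out}"
                     for raw, out in FORMAT_EXAMPLES)
```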

Reducing Execution Time

Efficiency was a key priority: the execution time for natural language commands had to be minimized. To achieve this, the team set out to reduce the number of large language model (LLM) calls per command, applying optimization techniques across the system's components. The result was a notable reduction in LLM calls and a substantial improvement in response times.
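
One such optimization, memoizing repeated LLM calls, is sketched below; the `call_llm` wrapper is a stand-in for the actual model API. Merging adjacent pipeline steps into a single call is another common technique:

```python
import functools

def call_llm(prompt: str) -> str:
    """Stand-in for the real model API call (stubbed for this sketch)."""
    return f"model output for: {prompt[:40]}"

@functools.lru_cache(maxsize=1024)
def cached_llm(prompt: str) -> str:
    # Identical prompts (repeated lookups, format conversions) are served
    # from the cache instead of triggering another network round trip.
    return call_llm(prompt)
```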

Retrieving Foreign Referenced Data

Certain endpoints require foreign referenced data to be retrieved from the database and merged into the extracted object. To address this, a lookup module was introduced that efficiently retrieves the relevant foreign data and integrates it into the response, guaranteeing precise and efficient execution of complex tasks with foreign data dependencies.
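
A minimal sketch of such a lookup, resolving a customer name to an identifier before execution; the field names, endpoint, and response shape are illustrative:

```python
import requests

def resolve_foreign_refs(payload: dict, token: str) -> dict:
    """Replace a human-readable customer name with its database identifier."""
    if "CustomerName" in payload:
        resp = requests.get(
            "https://api.example.com/sales/customers",  # hypothetical endpoint
            params={"name": payload.pop("CustomerName")},
            headers={"Authorization": f"Bearer {token}"},
            timeout=30,
        )
        resp.raise_for_status()
        # Take the first match; a production system would disambiguate.
        payload["CustomerId"] = resp.json()["customers"][0]["id"]
    return payload
```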

Results

The team met the client's objectives by delivering an AI-integrated tool that executes tasks from simple natural language commands. The tool executes workflows with a high degree of accuracy, noticeably improving user productivity and the overall user experience. The solution demonstrates the potential of AI integration and aligns with the client's vision and requirements.