Awesome

A moqui-wechat component

To install run (with moqui-framework):

$ ./gradlew getComponent -Pcomponent=moqui-wechat

To test run (use --tests "*WechatServices", matching the case):

$ ./gradlew :runtime:component:moqui-wechat:test --tests "*WechatServices"

To build the moqui-wechat component and integrate an AI tool, Ollama with Llama 3.1 was chosen, accessed through the ollama4j plugin. The integration process is as follows:


Step 1: Choosing Ollama with Llama 3.1

You’ve selected Ollama with Llama 3.1 as your private AI service due to its data privacy advantages and strong natural language understanding. This ensures all data interactions remain secure and within your infrastructure, making it ideal for sensitive ERP environments like Moqui.

Step 2: Integration Using ollama4j Plugin

You will integrate Ollama into Moqui-WeChat via the ollama4j plugin, which facilitates API interactions between the WeChat interface, Moqui’s ERP, and Ollama’s AI capabilities.

  1. WeChat Request Handler:

    • Create a request handler in Moqui that captures questions from WeChat users, authenticates them, retrieves user roles, and sends the query to the Llama 3.1 model via the ollama4j plugin.
    • Example: A WeChat user asks about their department’s inventory status. Moqui fetches the user's role and filters the request based on their permissions before querying Llama 3.1.
  2. Response Processing and Filtering:

    • After receiving the AI’s response, implement logic in Moqui to filter the information based on the user’s permissions (role-based access), ensuring users see only the data they are authorized to access.
  3. API-Based Interaction:

    • Use the ollama4j plugin to handle API calls between Moqui and Ollama. When a WeChat user asks a question, the plugin sends the query to the Llama 3.1 model and returns the filtered result to Moqui for further processing before displaying it on WeChat.
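The request-handler flow above can be sketched as follows. This is a minimal, hypothetical outline (class and method names are illustrative, not actual Moqui or ollama4j APIs); the `AiBackend` seam is where an ollama4j-backed call to the llama3.1 model would plug in, using whatever `generate(...)` signature your ollama4j version exposes.

```java
import java.util.Set;

// Sketch of the WeChat request handler: build a role-scoped prompt,
// then delegate to the AI backend. All names here are illustrative.
public class WechatRequestHandler {

    /** Seam for the LLM call. A real implementation would wrap ollama4j's
     *  OllamaAPI pointed at a local Ollama server; check the exact API of
     *  the ollama4j version you bundle, as it has changed between releases. */
    public interface AiBackend {
        String generate(String model, String prompt);
    }

    private final AiBackend backend;

    public WechatRequestHandler(AiBackend backend) {
        this.backend = backend;
    }

    /** Builds a prompt that tells the model which roles the user holds,
     *  so permission context travels with the question. */
    static String buildPrompt(String question, Set<String> roles) {
        return "You are an ERP assistant. The user holds roles " + roles
                + " and may only be shown data visible to those roles.\n"
                + "Question: " + question;
    }

    public String handle(String question, Set<String> roles) {
        return backend.generate("llama3.1", buildPrompt(question, roles));
    }
}
```

With a stub backend this is easy to unit-test before wiring in a live Ollama server.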

Step 3: Customization for Role-Based Access

Moqui’s Party Authority system must be integrated with the AI responses to ensure that users only see information permitted by their roles.
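One possible shape for that filtering step, sketched below under the assumption that each data row carries the role required to view it. The types (`InventoryRow`, `requiredRole`) are hypothetical illustrations, not Moqui Party Authority APIs; in a real service the check would query Moqui's authorization data instead.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical post-response filter: drop rows the user's roles don't
// cover before the answer is rendered back to WeChat.
public class RoleFilter {

    /** Illustrative data row; requiredRole is the role needed to see it. */
    public record InventoryRow(String facilityId, int qty, String requiredRole) {}

    /** Keeps only the rows whose required role is among the user's roles. */
    public static List<InventoryRow> visibleTo(Set<String> userRoles,
                                               List<InventoryRow> rows) {
        return rows.stream()
                .filter(r -> userRoles.contains(r.requiredRole()))
                .collect(Collectors.toList());
    }
}
```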

Step 4: Data Training and Maintenance

To ensure the AI stays updated with current ERP data, you will need a regular training schedule for the Llama 3.1 model.

Step 5: Implementation Plan

  1. API Setup: Use the ollama4j plugin to establish API connections between Moqui, WeChat, and Ollama, enabling smooth data flow for natural language queries.
  2. Role-Based Filtering: Implement Moqui’s logic for filtering responses based on the user’s Party Authority roles.
  3. Regular Data Training: Build a pipeline that regularly trains Llama 3.1 with ERP updates to ensure accurate and current AI responses.
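The first two plan items can be wired together as one request pipeline, sketched below with hypothetical names. Step 1 (the AI call) and step 2 (role-based filtering) are injected as functions; step 3 (regular training) runs separately and only changes the model behind the AI call.

```java
import java.util.function.BiFunction;
import java.util.function.UnaryOperator;

// Hypothetical composition of the plan: ask the model, then filter the
// answer for the requesting user before it reaches WeChat.
public class WechatQueryPipeline {
    private final UnaryOperator<String> ask;                        // step 1: ollama4j-backed in practice
    private final BiFunction<String, String, String> filterForUser; // step 2: Party Authority check

    public WechatQueryPipeline(UnaryOperator<String> ask,
                               BiFunction<String, String, String> filterForUser) {
        this.ask = ask;
        this.filterForUser = filterForUser;
    }

    /** Answers a WeChat question, then filters the raw answer for the user. */
    public String answer(String userId, String question) {
        String raw = ask.apply(question);
        return filterForUser.apply(userId, raw);
    }
}
```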

This approach enables a private, secure, and scalable AI-powered WeChat interaction system within the Moqui ERP environment using Ollama with Llama 3.1 and the ollama4j plugin.

WeChat public account AI integration

Pay attention to parameter differences between Llama model versions: moving from llama3.1 to llama3.2 may require updating the ollama4j jar to support the newer model.