AI Installation
Welcome to the AI Installation Guide, your comprehensive resource for setting up and configuring our AI solution. Our AI system leverages machine learning and advanced algorithms to analyze data, derive insights, and enhance decision-making processes within your organization.
System Requirements:
CPU Cores: Depends on workload; at least 2 cores recommended.
RAM: Depends on workload; at least 4 GB recommended.
Disk Space: Depends on workload; at least 50 GB recommended.
Public Ports: 8000
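You can verify that a server meets these requirements with standard Linux tools; the checks below are a minimal sketch, and the ss filter assumes you want to confirm that port 8000 is not already taken:

    nproc                    # number of CPU cores
    free -h                  # total and available RAM
    df -h /                  # free disk space on the root filesystem
    ss -tln | grep ':8000'   # no output means port 8000 is free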
Acquiring Installer Source Code Files on a Linux Server
Before proceeding with the installation, it's essential to acquire the source code files necessary for setting up the system. These files typically include:
.env.conf: Configuration file containing environment variables and settings.
docker-compose.yml: YAML file defining the Docker containers and services required for deployment.
install.sh: Shell script responsible for executing installation procedures and configuring the system.
To acquire these files:
Log in to your Linux server using your credentials.
Identify the source code repository or directory where the installation files are stored.
Use commands such as git clone or wget to download the source code files onto your Linux server, for example as sketched below.
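Both approaches are shown in this minimal sketch; the repository and archive URLs are placeholders, so substitute the location your team actually uses:

    # Option 1: clone the repository containing the installer files
    git clone <repository_url> AI_Installer

    # Option 2: fetch an archive of the files and unpack it
    wget <archive_url> -O ai_installer.tar.gz
    tar -xzf ai_installer.tar.gz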
The .env.conf file contains essential configuration settings, including service ports, which must be adjusted according to port availability and your specific requirements. Note that 11 ports must remain available so the system's services can communicate with one another; this is essential for continuous operation and efficient data exchange within the system architecture. You can edit the .env.conf file with any text editor, as sketched below.
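A minimal sketch of checking port availability and adjusting an entry; the key SERVICE_PORT is hypothetical, so use the variable names actually present in your .env.conf:

    # List ports already in use so you can choose 11 free ones
    ss -tln

    # Open the configuration file in a text editor
    nano .env.conf

    # Or change a single entry non-interactively; SERVICE_PORT is a
    # hypothetical key used here for illustration only.
    sed -i 's/^SERVICE_PORT=.*/SERVICE_PORT=8000/' .env.conf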
To configure the AI_Interface, navigate to its source code and edit the following two files (see the sketch after this list):
env_config.yaml: Specifies the agreed-upon port for each service.
Config.ini: Configures the IP address of the portal.
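A minimal sketch of inspecting and adjusting these files from the shell; the key names matched by grep and sed are hypothetical, so align them with the entries actually present in your copies:

    # Inspect the current port assignments in env_config.yaml
    grep -i port env_config.yaml

    # Update the portal IP in Config.ini; 'portal_ip' is a hypothetical
    # key shown for illustration only.
    sed -i 's/^portal_ip=.*/portal_ip=192.168.1.10/' Config.ini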
Acquiring the Trained Model:
By default, the trained model should be located in a folder named NER_model_server, which shares the same parent directory as the AI_Installer folder.
As shown in the image above, confirm that the two folders, NER_model_server and AI_Installer, exist in the same directory. If they do not, you'll need to edit the docker-compose.yml file in the AI_Installer directory to specify the correct location of the model, as sketched below.
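A quick check of the layout; the commented volume path ../NER_model_server is an assumption about how the compose file references the model, so adapt it to the mapping you find in your docker-compose.yml:

    # From inside AI_Installer, list the parent directory; both folders
    # should appear side by side.
    ls ..
    # AI_Installer  NER_model_server

    # If the model lives elsewhere, update the corresponding volume path
    # in docker-compose.yml (e.g. ../NER_model_server) to its real location.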
Installation Process:
After completing the three previous steps, return to the AI_Installer directory and initiate the installation process by executing the following command in the terminal on your Linux server:
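A minimal sketch of the invocation, assuming the install.sh script listed earlier is the intended entry point:

    # Run the installer from within the AI_Installer directory
    cd AI_Installer
    bash install.sh    # assumed entry point; install.sh is listed above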
Running this command will automatically install the necessary requirements and launch the services as configured.
Verification of Service Status:
Upon executing the installation command, you can verify the status of all services by examining the printed output. Look for indications such as "done," "created," or "up to date" as shown below:
"done": Indicates that the service setup process has been successfully completed.
"created": Signifies the creation of new services as part of the installation process.
"up to date": Confirms that the services are already up and running and are currently at their latest versions.
Reviewing these indicators in the printed output ensures that all services are operational and functioning as expected.
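For reference, here is a sketch of the kind of output these indicators appear in; the network and service names are illustrative, not the actual names used by this system:

    # Illustrative installer output; your service names will differ.
    #   Creating network "ai_installer_default" ... done
    #   Creating ai_service_1                   ... done
    #   ai_interface_1 is up-to-date

    # You can re-check service state at any time with:
    docker-compose ps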
You can check whether the AI service is running by opening your web browser and visiting a URL of the form <server_ip>:<assigned_port>. For example, if your server IP is 127.0.0.1 and the port is 7000, type "127.0.0.1:7000" into your browser.
Once you access this URL, you'll see the Swagger documentation. The Swagger page provides details about the AI service's available API endpoints and functionalities.
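If you prefer the command line, a simple reachability check such as the following (substituting your own IP and port) should return an HTTP response from the Swagger page:

    # Replace 127.0.0.1 and 7000 with your server IP and assigned port
    curl -I http://127.0.0.1:7000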
Please see the image below for a visual guide to navigating the Swagger documentation.