
Integrate

Environment Setup

  • Development is carried out in Visual Studio.
  • To develop against the DLLs, configure the related settings according to the following guide.

1. Visual Studio

DLL File Name     Task                 Configuration
DllEngine.dll     Image Restoration    Release x64
DllEngine_d.dll   Image Restoration    Debug x64
  • Specify the Configuration based on the table above.

  • Using Visual Studio 2022 is recommended.

    • How to build with a different version of Visual Studio

      • Visual Studio 2017 is used as the example below

      1) Project Properties - Configuration Properties - General - General - change the Windows SDK Version

      • ex) 10.0.17763.0

      2) Project Properties - Configuration Properties - General - Project Defaults - change the Platform Toolset to the desired version

      • ex) Visual Studio 2017 (v141)
    • If the DLL fails to load (see the diagnostic sketch below)

      • Project Properties - Configuration Properties - Debugging - Environment, check the DLL path
      • ex) PATH=$(TargetDir)..\..\..\dll;$(ProjectDir)..\..\dll;%PATH%
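
      • Diagnostic sketch for DLL load failures: this is a hypothetical standalone check, not part of the SDK; it uses the Win32 LoadLibraryA/GetLastError APIs, and the relative DLL path is only an example.

        #include <windows.h>
        #include <iostream>

        int main()
        {
            // Try to load the engine DLL directly; adjust the path to match your layout.
            HMODULE handle = LoadLibraryA("dll\\DllEngine.dll");
            if (handle == nullptr)
            {
                // Error 126 usually means the DLL or one of its dependencies was not found;
                // error 193 usually means a 32-bit/64-bit mismatch.
                std::cout << "LoadLibrary failed, error code: " << GetLastError() << std::endl;
                return 1;
            }
            std::cout << "DllEngine.dll loaded successfully." << std::endl;
            FreeLibrary(handle);
            return 0;
        }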

2. Dependency

  • Dependency information required for integration

  • dll

    • Project Properties - Configuration Properties - Debugging - Environment
      • Add "PATH={DllEngine.dll directory path};%PATH%"
      • ex) PATH=$(TargetDir)dll;%PATH%
  • lib

    • Project Properties - Linker - Input - Additional Dependencies
      • Add "DllEngine.lib"
    • Project Properties - Linker - General - Additional Library Directories
      • Add directory path containing DllEngine.lib
      • ex) $(TargetDir)lib
  • header

    • Project Properties - C/C++ - General - Additional Include Directories
      • Add directory path containing header files
      • ex) $(TargetDir)include
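
  • As an in-source alternative to the "Additional Dependencies" linker setting above, the import library can also be requested with a #pragma directive. This is a minimal MSVC-specific sketch and assumes the directory containing DllEngine.lib is already listed under Additional Library Directories.

    // In-source alternative to the "Additional Dependencies" linker setting (MSVC-specific).
    // Assumes the directory containing DllEngine.lib is listed under Additional Library Directories.
    #pragma comment(lib, "DllEngine.lib")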

3. Install Library

  • This SDK is implemented to run on NVIDIA GPUs
  • NVIDIA-related installation steps (a verification sketch follows this list)

    1. Graphics driver installation
      • Install the driver corresponding to your device from link

    2. Install CUDA
      • Install CUDA compatible with your device from link
      • You can check compatible versions at the provided link

    3. Install cuDNN
      • Install cuDNN compatible with your CUDA version from link
      • Extract the archive to ${ProgramFiles}/NVIDIA/CUDNN/v8.x
      • Add the lib path to the lib Dependency
      • Add the include path to the include Dependency

    4. Install zLib
      • Download the DLL for Windows 64-bit from link
      • Copy zlibwapi.dll (dll_x64/zlibwapi.dll inside zlib123dllx64.zip) into the cuDNN bin directory

    5. Install TensorRT
      • Install TensorRT from link
      • Extract the archive to ${ProgramFiles}/NVIDIA/TensorRT-8.x.x.x
      • Add the lib path to the lib Dependency
      • Add the include path to the include Dependency
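
  • Once the NVIDIA components are installed, the sketch below can be used to confirm that they are visible to the build. It is a hypothetical check, not part of the SDK; it assumes the CUDA, cuDNN, and TensorRT include/lib paths were added as described in the Dependency section and that cudart.lib, cudnn.lib, and nvinfer.lib are linked.

    // Minimal sketch to verify the NVIDIA stack after installation.
    #include <cuda_runtime.h>
    #include <cudnn.h>
    #include <NvInfer.h>
    #include <iostream>

    int main()
    {
        int runtime_version = 0;
        int device_count = 0;
        cudaRuntimeGetVersion(&runtime_version);   // e.g. 11070 for CUDA 11.7
        cudaGetDeviceCount(&device_count);         // number of visible NVIDIA GPUs

        std::cout << "CUDA runtime version : " << runtime_version << std::endl;
        std::cout << "CUDA devices found   : " << device_count << std::endl;
        std::cout << "cuDNN version        : " << cudnnGetVersion() << std::endl;      // e.g. 8600 for cuDNN 8.6.0
        std::cout << "TensorRT version     : " << getInferLibVersion() << std::endl;   // e.g. 8501 for TensorRT 8.5.1
        return 0;
    }
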
  • Third-party libraries used by the SDK

    1. spdlog_native - version 2021.7.30
    2. nlohmann.json - version 3.11.2
    3. opencv - version 4.2

  • Recommended Installation Method:
    • Install the libraries via the NuGet Package Manager (a short verification sketch follows this list).
    • Tools - NuGet Package Manager - Manage NuGet Packages for Solution
      • Install spdlog_native (by yuxchen) version 2021.7.30
      • Install nlohmann.json 3.11.2
      • Install opencv 4.2
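
  • The sketch below is a hypothetical check that the three NuGet packages resolve and link correctly; it assumes the packages expose the usual spdlog/, nlohmann/, and opencv2/ include layouts.

    // Minimal sketch to confirm the spdlog, nlohmann::json, and OpenCV packages are usable.
    #include <spdlog/spdlog.h>
    #include <nlohmann/json.hpp>
    #include <opencv2/core.hpp>

    int main()
    {
        // nlohmann::json: build a small document and serialize it.
        nlohmann::json info = { {"engine", "DllEngine"}, {"gpu_id", 0} };

        // OpenCV: create an 8-bit single-channel image filled with zeros.
        cv::Mat blank(64, 64, CV_8UC1, cv::Scalar(0));

        // spdlog: log both results.
        spdlog::info("json: {}", info.dump());
        spdlog::info("OpenCV Mat size: {}x{}", blank.cols, blank.rows);
        return 0;
    }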

4. Installation Check

  • The development environment is set up correctly if the following sample builds and runs without errors

    #include "dllengine.h"
    
    #include <opencv2/opencv.hpp>
    #include <opencv2/core.hpp>
    #include <opencv2/imgcodecs.hpp>
    
    using namespace cv;
    
    const std::string IMG_FILE = "path/to/image.png";
    std::string ENGINE_FILE = "path/to/onnxfile.onnx";
    int GPU_ID = 0; 
    
    int main()
    {
        Inferencer* inferencer = get_inferencer(ENGINE_FILE, GPU_ID);
    
        Mat img_input = imread(IMG_FILE, IMREAD_GRAYSCALE);
        int input_height = img_input.rows;
        int input_width = img_input.cols;
        auto* output_buffer = new unsigned char[img_input.rows * img_input.cols * 1]; 
    
        if (inferencer != nullptr) 
        {
            if (do_inference(inferencer, img_input.data, output_buffer, img_input.cols, img_input.rows) != 0)
            {
                std::cout << "Error is occurred while inferencing.";
            }
            remove_inferencer(inferencer);
        }
    }