run-local-server

Description

Builds the environment specified by --image locally and serves the local model code as a WebAPI.

Notes

The run-local-server command requires Docker to be installed.

Synopsis

$ abeja model run-local-server [--help]
Usage: abeja model run-local-server [OPTIONS]

  Local run commands

Options:
  -h, --handler TEXT              Model handler  [required]
  -i, --image TEXT                Base-image name  [required]
  -d, --device_type [x86_cpu|x86_gpu|jetson_tx2|raspberry3]
                                  Device type
  -e, --environment ENVIRONMENTSTRING
                                  Environment variables
  -p, --port PORTNUMBER           port number assigned to local server (
                                  arbitrary number in 1 - 65535 )
  --no-cache, --no_cache          Do not use build cache
  --v1                            Specify if you use old custom runtime image
  --help                          Show this message and exit.

Options

-h, --handler

Specify the path of the function to be called. If --handler main:handler is specified, the handler function defined in the main.py file is called.

If the file to call is placed directly under the src directory, specify src.main:handler.
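As a rough illustration only (not the CLI's actual implementation), a module:function handler string can be thought of as resolving like this; resolve_handler is a hypothetical name used for the sketch:

```python
import importlib


def resolve_handler(spec):
    """Resolve a 'module:function' handler string, e.g. 'src.main:handler'."""
    module_name, func_name = spec.split(":")
    module = importlib.import_module(module_name)  # e.g. import src.main
    return getattr(module, func_name)              # fetch the handler function


# Demonstration with standard-library functions instead of main:handler:
join = resolve_handler("os.path:join")
dumps = resolve_handler("json:dumps")
```

In the same way, src.main:handler names the handler function inside src/main.py.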

-i, --image

Specify the image to use. See here for images that can be specified.

-d, --device_type

Specify the device type. [x86_cpu, x86_gpu, jetson_tx2, raspberry3]

-e, --environment

Specify an environment variable. Registered environment variables can be referenced from your code, e.g. IMAGE_WIDTH:100.
For more information on user-specifiable environment variables, see here.
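For instance, a variable registered with -e IMAGE_WIDTH:100 could be read from handler code via os.environ. IMAGE_WIDTH is just an illustrative name, and the setdefault line below only simulates what the CLI would inject:

```python
import os

# Simulate the variable the CLI would inject for `-e IMAGE_WIDTH:100`.
os.environ.setdefault("IMAGE_WIDTH", "100")

# Read it from handler code, falling back to a default when it is unset.
image_width = int(os.environ.get("IMAGE_WIDTH", "224"))
```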

-p, --port

Specify the port number. Any integer value between 1 and 65535 can be specified.

--no-cache, --no_cache

Rebuild the image from scratch without using the Docker build cache.

--v1

Specify this option when using the older 18.10 custom runtime images.

Example

Launch local WebAPI

Premise:

  • Assume a main.py containing the following handler.
$ cat main.py
def handler(iter, context):
    for data in iter:
        yield data

Command:

$ abeja model run-local-server -h main:handler -i abeja/all-cpu:18.10

Output:

[info] preparing image : abeja/all-cpu:18.10
[info] building image
Step 1/7 : FROM abeja/all-cpu:18.10

 ---> a8e1fd359712
Step 2/7 : ADD . /tmp/app

 ---> bccfa55096b5
Step 3/7 : WORKDIR /tmp/app

 ---> 781ceeb720b8
Step 4/7 : ENV SERVICE_TYPE HTTP

 ---> Running in fe0468ea22d9
 ---> f1c8bb5b505b
Step 5/7 : ENV HANDLER main:handler

 ---> Running in 3407d8c19b88
 ---> 971ce380a3a6
Step 6/7 : RUN if test -r requirements.txt; then pip install --no-cache-dir -r requirements.txt; fi

 ---> Running in ff413a5f3683
 ---> f1acfe271dfd
Step 7/7 : LABEL "abeja-platform-model-type"='inference' "abeja-platform-requirement-md5"=''

 ---> Running in 20afeb06abff
 ---> 67035f4c5d70
Successfully built 67035f4c5d70
Successfully tagged abeja/all-cpu/18.10/local-model:latest
[info] setting up local server
[info] waiting server running
{"log_id": "1ef02adf-c363-4e63-84e1-71f1e9e54bbc", "log_level": "INFO", "timestamp": "2018-07-12T08:00:30.785670+00:00", "source": "model:run.run.203", "requester_id": "-", "message": "start executing model. version:0.10.2", "exc_info": null}
{"log_id": "aa3ee889-ffe5-4c69-929f-769cc05e766d", "log_level": "INFO", "timestamp": "2018-07-12T08:00:30.786656+00:00", "source": "model:run.run.218", "requester_id": "-", "message": "start installing packages from requirements.txt", "exc_info": null}
{"log_id": "3c80881d-da66-4d9c-a077-f3aa857c6208", "log_level": "INFO", "timestamp": "2018-07-12T08:00:30.789035+00:00", "source": "model:run.run.224", "requester_id": "-", "message": "requirements.txt not found, skipping", "exc_info": null}
 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
 ----- Local Server -----
 Started successfully!

 Endpoint : http://localhost:58670
 Handler :  main:handler
 Image :    abeja/all-cpu:18.10

 you can now access this http api!

 press Ctrl + C to stop
 ------------------------

Confirmation:

$ curl -H 'Content-Type: application/json' http://localhost:58670 -d '{"val": 12345}'
{"val": 12345}
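The echo behavior above can also be checked without the server by calling the handler directly: it receives an iterable of request payloads and yields each one back unchanged.

```python
def handler(iter, context):
    # Echo handler from main.py: yield each request payload unchanged.
    for data in iter:
        yield data


# Feed one JSON-like payload, as the curl request above does over HTTP.
result = list(handler([{"val": 12345}], None))
```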