Metrics & Profiling
To help developers determine appropriate hardware requirements and track down performance bottlenecks, HQS Tasks has built-in features that expose metrics and profiling data to the client.
In particular, the metrics are stored in the execution_meta property of the task response object.
Note that the availability of metrics and profiling data depends on the environment in which the task is executed.
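For illustration, a client might read this metadata defensively, since it can be absent in some environments. In the sketch below only the execution_meta property name is taken from this page; the result object and the helper function are assumptions:

```python
# Hypothetical sketch: `result` stands for the task response object returned
# by your client code; only the `execution_meta` property name is taken from
# this page.
def get_execution_meta(result):
    """Return the task's execution metadata, or None if it was not reported."""
    # Metrics and profiling data may be absent in some execution environments,
    # so fall back to None instead of raising.
    return getattr(result, "execution_meta", None)
```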
Metrics
HQS Tasks attempts to collect the following information for each task execution:
- duration: The time the task took to execute (wall time)
- cpu_avg: The CPU usage (ratio of CPU time over wall time)
- memory_peak: The memory usage (peak memory allocation)
Note that these numbers are measured at the scope of the task's process, or even of the whole container, depending on the environment in which the task is executed.
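As a rough illustration of how these numbers could guide hardware sizing, the sketch below assumes that execution_meta behaves like a dictionary, that duration is reported in seconds, and that memory_peak is reported in bytes; none of these assumptions are guaranteed by this page:

```python
def summarize_metrics(meta: dict) -> str:
    """Human-readable summary of the built-in metrics (units are assumptions)."""
    summary = []
    duration = meta.get("duration")        # wall time, assumed to be in seconds
    cpu_avg = meta.get("cpu_avg")          # ratio of CPU time over wall time
    memory_peak = meta.get("memory_peak")  # peak allocation, assumed to be in bytes

    if duration is not None:
        summary.append(f"wall time: {duration:.2f} s")
    if cpu_avg is not None:
        # A ratio noticeably above 1 means the task made use of several cores.
        summary.append(f"average CPU usage: {cpu_avg:.2f} cores")
    if memory_peak is not None:
        summary.append(f"peak memory: {memory_peak / 2**20:.1f} MiB")
    return ", ".join(summary)
```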
Profiling Data
Note: Currently, this is only available for fast-running tasks as a feature preview. In the future, it will be extended to let the developer profile individual sections of the task and will be made available in other execution environments.
The amount of time spent in each of the following phases of the task execution is measured and reported to the client:
- service_receive_request: Receiving the input data (Note: only the backend-internal data transmission is measured; this will be improved in the future.)
- task_input_deserialization: Validating (deserializing) the input data from JSON to form the task function's argument values
- task_handler: Invoking the Python function which implements the task (i.e. the @task-decorated function)
- task_output_serialization: Serializing the returned value (output) into JSON
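To give an idea of how this data might be used, the sketch below separates the time spent in the task handler from the surrounding framework phases. The phase names come from the list above, while treating the profiling data as a flat mapping of seconds is an assumption:

```python
PROFILING_PHASES = (
    "service_receive_request",
    "task_input_deserialization",
    "task_handler",
    "task_output_serialization",
)


def profiling_overhead(profile: dict) -> float:
    """Fraction of the measured time spent outside the task handler.

    `profile` is assumed to map the phase names above to durations in seconds;
    the exact shape of the reported profiling data may differ.
    """
    total = sum(profile.get(phase, 0.0) for phase in PROFILING_PHASES)
    if total == 0.0:
        return 0.0
    return 1.0 - profile.get("task_handler", 0.0) / total
```

A large overhead fraction for a fast-running task usually indicates that data transfer and (de)serialization dominate, whereas a value close to zero means almost all of the measured time was spent in the @task-decorated function itself.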