Infrastructure Pods and Resource Management
During the installation procedure you can provide information on how to best "operationalize" your infrastructure. By configuring --node-selector, --toleration and --operator-resources you can control where the operator Pods are scheduled and which resources they are assigned.
The usage of these advanced properties assumes you are familiar with Kubernetes Scheduling concepts and configurations.
The aforementioned flags work with both the OLM installation and the regular installation.
Scheduling
Node Selectors
The most basic option is to assign the Camel K operator Pods to a specific cluster Node. The functionality is based on the Kubernetes NodeSelector feature. As an example, you can schedule the Camel K infra Pods to a specific Node of your cluster. See how to configure this according to the installation methodology you selected.
Tolerations
The --toleration option lets a Camel K infra Pod tolerate any matching Taint, following the Kubernetes Taints and Tolerations feature. As an example, let's suppose we have a node tainted as "dedicated=camel-k:NoSchedule". In order to allow the infra Pods to be scheduled on that Node, we can provide a matching toleration during the installation procedure. See how to configure this according to the installation methodology you selected.
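For example, a sketch assuming a CLI-based installation and the taint mentioned above (exact value syntax may vary between versions):
$ kamel install --toleration="dedicated=camel-k:NoSchedule"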
Resources
While installing the Camel K operator, you can also specify the resource requests and limits to assign to the operator Pod with the --operator-resources option. The option expects the configuration format required by Kubernetes Resource Management. See how to configure this according to the installation methodology you selected.
If you specify a limit but do not specify a request, Kubernetes automatically assigns a request that matches the limit.
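As a sketch, assuming a CLI-based installation where requests and limits are passed as repeated key=value pairs (the values mirror the defaults suggested in the next section; exact syntax may vary by version):
$ kamel install --operator-resources requests.cpu=500m --operator-resources requests.memory=2Gi --operator-resources limits.cpu=2 --operator-resources limits.memory=8Gi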
Default Operator Pod configuration
The main contributor to the Camel K operator Pod's resource consumption is likely to be the number of parallel builds performed in the operator Pod, so the resource requirements should be defined accordingly. The following requirements are sensible defaults that should work in most cases:
resources:
  requests:
    memory: "2Gi"
    cpu: "500m"
  limits:
    memory: "8Gi"
    cpu: "2"
Note that if you plan to perform native builds, the memory requirements may increase significantly. Also, the CPU requirements are rather "soft": too little CPU won't break the operator, but it will generally perform slower.
Default Integration Pod configuration
The resources set on the Integration container are highly dependent on what your application is doing. You can control this behavior by appropriately setting the resources on the Integration via the container trait.
Be aware that the defaults are the following:
resources:
  requests:
    memory: "256Mi"
    cpu: "125m"
  limits:
    memory: "1Gi"
    cpu: "500m"