Helm Chart Options

Helm chart for GoodData.CN

Note: The dependent subcharts (redis-ha, etcd, qdrant, and postgresql-ha) are included in the GoodData.CN chart.
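Each bundled subchart can be switched off individually when you bring your own services. A minimal values.yaml sketch using the deploy* toggles documented in the Values table below (external replacements are then configured under service.postgres and service.redis):

```yaml
# Sketch: disable bundled subcharts in favor of external services.
deployPostgresHA: false   # use an external PostgreSQL-compatible server (service.postgres)
deployRedisHA: false      # use an external Redis-compatible server (service.redis)
deployDexIdP: true        # keep the bundled Dex Identity Provider
```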

Requirements

Repository Name Version
file://../gooddata-common gooddata-common 0.1.0
https://dandydeveloper.github.io/charts redis-ha 4.33.7
https://qdrant.github.io/qdrant-helm qdrant 1.14.0
oci://registry-1.docker.io/bitnamicharts etcd 11.3.6
oci://registry-1.docker.io/bitnamicharts postgresql-ha 9.4.9
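The dotted keys in the Values table below map to nested YAML in your values.yaml. For example, afmExecApi.jvmOptions and a component's extraEnvVars list (the variable name here is a placeholder) would be written as:

```yaml
afmExecApi:
  jvmOptions: "-XX:ReservedCodeCacheSize=100M -Xms320m -Xmx320m -XX:MaxMetaspaceSize=170M"
  extraEnvVars:
    - name: SOME_VAR        # placeholder variable name
      value: "some value"
```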

Values

Key Type Default Description
afmExecApi.extraEnvVars list [] Additional environment variables for example: extraEnvVars: [{"name":"SOME_VAR","value":"some value"}]
afmExecApi.image.name string "afm-exec-api"
afmExecApi.jvmOptions string "-XX:ReservedCodeCacheSize=100M -Xms320m -Xmx320m -XX:MaxMetaspaceSize=170M" Custom JVM options
afmExecApi.livenessProbe.initialDelaySeconds int 30
afmExecApi.podDisruptionBudget object {"maxUnavailable":"","minAvailable":""} define PodDisruptionBudget
afmExecApi.readinessProbe.initialDelaySeconds int 30
afmExecApi.resources object {"limits":{"cpu":"750m","memory":"965Mi"},"requests":{"cpu":"100m","memory":"600Mi"}} container resources
afmExecApi.resultIdHmac object {"historyLength":10,"rotationPeriod":86400} Configuration related to the result id HMAC signing
afmExecApi.resultIdHmac.historyLength int 10 How many HMAC keys are valid at the same time
afmExecApi.resultIdHmac.rotationPeriod int 86400 Time interval in seconds after which the HMAC key is rotated
afmExecApi.startupProbe.initialDelaySeconds int 30
analyticalDesigner.extraEnvVars list [{"name":"GDC_FEATURES_VALUES_ENABLE_EXECUTION_CANCELLING","value":"true"}] Additional environment variables for example: extraEnvVars: [{"name":"SOME_VAR","value":"some value"}]
analyticalDesigner.image.name string "analytical-designer"
analyticalDesigner.podDisruptionBudget object {"maxUnavailable":"","minAvailable":""} define PodDisruptionBudget
analyticalDesigner.resources object {"limits":{"cpu":"100m","memory":"35Mi"},"requests":{"cpu":"10m","memory":"15Mi"}} container resources
apiDocs.enabled bool true Can be optionally disabled by setting enabled: false
apiDocs.extraEnvVars list [] Additional environment variables for example: extraEnvVars: [{"name":"SOME_VAR","value":"some value"}]
apiDocs.image.name string "apidocs"
apiDocs.podDisruptionBudget object {"maxUnavailable":"","minAvailable":""} define PodDisruptionBudget
apiDocs.resources object {"limits":{"cpu":"100m","memory":"35Mi"},"requests":{"cpu":"10m","memory":"15Mi"}} container resources
apiGateway.extraEnvVars list [{"name":"GDC_FEATURES_VALUES_ENABLE_EXECUTION_CANCELLING","value":"true"}] Additional environment variables for example: extraEnvVars: [{"name":"SOME_VAR","value":"some value"}]
apiGateway.image.name string "api-gateway"
apiGateway.jvmOptions string "-XX:ReservedCodeCacheSize=60M -Xms140m -Xmx140m -XX:MaxMetaspaceSize=100M" Custom JVM options
apiGateway.livenessProbe.initialDelaySeconds int 30
apiGateway.podDisruptionBudget object {"maxUnavailable":"","minAvailable":""} define PodDisruptionBudget
apiGateway.readinessProbe.initialDelaySeconds int 30
apiGateway.resources object {"limits":{"cpu":"500m","memory":"540Mi"},"requests":{"cpu":"100m","memory":"300Mi"}} container resources
apiGateway.startupProbe.initialDelaySeconds int 30
apiGw.cache.organization.expireAfterWriteSeconds int 60 Time in seconds after write when organization configuration cache entry expires
apiGw.cache.organization.size string "1000" Maximum number of cached organization configuration entries (0 disables cache)
apiGw.callForwarder.timeouts.connect string "1m" Connection timeout to backend services (ISO-8601 duration or specific unit like 10s, 1h 30m)
apiGw.callForwarder.timeouts.request string "4m" Request timeout for backend calls
apiGw.callForwarder.timeouts.socket string "3m" Socket timeout for backend connections
apiGw.extraEnvVars list [] Additional environment variables for example: extraEnvVars: [{"name":"SOME_VAR","value":"some value"}]
apiGw.gracefulShutdownTimeoutSeconds int 185 Graceful shutdown timeout (higher than backend timeouts to ensure active requests are completed)
apiGw.image.name string "gateway-api-gw"
apiGw.jvmOptions string "-Xms280m -Xmx800m -XX:ActiveProcessorCount=1" Custom JVM options (must align with resource.limits)
apiGw.livenessProbe.initialDelaySeconds int 30
apiGw.podDisruptionBudget object {"maxUnavailable":"","minAvailable":""} define PodDisruptionBudget
apiGw.podLabels object {} Custom labels to add to pods
apiGw.podMonitor.path string "/metrics" Metrics endpoint path
apiGw.podMonitor.port string "metrics" Port name for metrics scraping
apiGw.rateLimiter.enabled bool true Enable rate limiting functionality (see Configure Rate Limits)
apiGw.rateLimiter.files list [] List of rate limiter configuration files (see Configure Rate Limits for structure)
apiGw.readinessProbe.initialDelaySeconds int 30
apiGw.resources object {"limits":{"cpu":"1000m","memory":"1000Mi"},"requests":{"cpu":"280m","memory":"400Mi"}} container resources
apiGw.startupProbe.initialDelaySeconds int 30
authService.extraEnvVars list [] Additional environment variables for example: extraEnvVars: [{"name":"SOME_VAR","value":"some value"}]
authService.image.name string "auth-service"
authService.jvmOptions string "-XX:ReservedCodeCacheSize=100M -Xms190m -Xmx190m -XX:MaxMetaspaceSize=150M" Custom JVM options
authService.livenessProbe.initialDelaySeconds int 30
authService.podDisruptionBudget object {"maxUnavailable":"","minAvailable":""} define PodDisruptionBudget
authService.readinessProbe.initialDelaySeconds int 30
authService.resources object {"limits":{"cpu":"500m","memory":"750Mi"},"requests":{"cpu":"100m","memory":"400Mi"}} container resources
authService.startupProbe.initialDelaySeconds int 30
automation.database.existingSecret string "" you can specify a custom secret with the automation database password
automation.database.existingSecretKey string "postgresql-password" you can specify a custom secret key to read the automation database password from
automation.database.name string "automation"
automation.database.password string ""
automation.database.user string "" if user is empty, the default postgres user and password are used
automation.enableHikariMonitoring bool true Whether to enable HikariCP monitoring
automation.extraEnvVars list [] Additional environment variables for example: extraEnvVars: [{"name":"SOME_VAR","value":"some value"}]
automation.image.name string "automation"
automation.jvmOptions string "-XX:ReservedCodeCacheSize=60M -Xms200m -Xmx1500m -XX:MaxMetaspaceSize=210M" Custom JVM options
automation.livenessProbe.initialDelaySeconds int 30
automation.podDisruptionBudget object {"maxUnavailable":"","minAvailable":""} define PodDisruptionBudget
automation.readinessProbe.initialDelaySeconds int 30
automation.resources object {"limits":{"cpu":"500m","memory":"2000Mi"},"requests":{"cpu":"100m","memory":"450Mi"}} container resources
automation.schedule.notificationsCleanup.cron string "0 0 0 * * ?" Cron expression for the scheduled job
automation.schedule.notificationsCleanup.retention string "30d" Retention period for notifications
automation.smtp.existingSecret string "" use an existing Secret with the keys “smtp_host”, “smtp_username”, and “smtp_password”
automation.smtp.host string "" SMTP host
automation.smtp.password string "" password for SMTP authentication
automation.smtp.username string "" username for SMTP authentication
automation.startupProbe.initialDelaySeconds int 30
calcique.extraEnvVars list [] Additional environment variables for example: extraEnvVars: [{"name":"SOME_VAR","value":"some value"}]
calcique.image.name string "calcique"
calcique.jvmOptions string "-XX:ReservedCodeCacheSize=110M -Xms380m -Xmx380m -XX:MaxMetaspaceSize=170M" Custom JVM options
calcique.livenessProbe.initialDelaySeconds int 30
calcique.podDisruptionBudget object {"maxUnavailable":"","minAvailable":""} define PodDisruptionBudget
calcique.readinessProbe.initialDelaySeconds int 30
calcique.resources object {"limits":{"cpu":"500m","memory":"1024Mi"},"requests":{"cpu":"150m","memory":"500Mi"}} container resources
calcique.startupProbe.initialDelaySeconds int 30
dashboards.extraEnvVars list [{"name":"GDC_FEATURES_VALUES_ENABLE_EXECUTION_CANCELLING","value":"true"}] Additional environment variables for example: extraEnvVars: [{"name":"SOME_VAR","value":"some value"}]
dashboards.image.name string "dashboards"
dashboards.podDisruptionBudget object {"maxUnavailable":"","minAvailable":""} define PodDisruptionBudget
dashboards.resources object {"limits":{"cpu":"100m","memory":"35Mi"},"requests":{"cpu":"10m","memory":"15Mi"}} container resources
deployDexIdP bool true If set to true, the Dex Identity Provider will be installed and configured according to the values under the “dex:” key below. Follow the guidelines at https://artifacthub.io/packages/helm/dex/dex on how to customize the settings. Disabling this component will require every Organization to use an external Identity Provider.
deployExportBuilder bool true If set to true, this chart will install the export-builder service used for slide exports
deployGateway bool false If set to true, this chart will install the gateway component
deployGenAIService bool false If set to true, this chart will install gen-ai service
deployPostgresHA bool true If set to true, this chart will install bitnami/postgresql-ha as part of the deployment. Postgres will be used for hosting the metadata and application configuration databases. If false, your existing external PostgreSQL-compatible server must be configured in the service.postgres section below. This option is useful for hosting the metadata database in AWS RDS, for example.
deployQdrant bool false If set to true, this chart will install Qdrant vector database
deployQuiverDatasource bool false If set to true, additional FlexQuery nodes will be deployed and used to provide additional datasource capabilities.
deployQuiverDatasourceFs bool false If set to true, FlexQuery nodes will be deployed with additional FS-based datasource capabilities. Implies deployQuiverDatasource: true.
deployRedisHA bool true If set to true, this chart will install the redis-ha subchart as part of the deployment. If false, your existing Redis-compatible server must be configured in the service.redis section below.
dex.config.database.existingSecret string "" you can specify a custom secret with the dex database password; the key needs to be “postgresql-password”
dex.config.database.name string "dex"
dex.config.database.password string ""
dex.config.database.sslMode string "disable" possible values: disable, require, verify-ca, verify-full
dex.config.database.user string "" if user is empty, the default postgres user and password are used
dex.config.enablePasswordDB bool true If set to true, Dex’s built-in local password database is enabled.
dex.config.expiry.deviceRequests string "24h"
dex.config.expiry.idTokens string "24h"
dex.config.expiry.signingKeys string "48h"
dex.config.grpc object {}
dex.config.logger.format string "json"
dex.config.logger.level string "info"
dex.config.oauth2.alwaysShowLoginScreen bool false
dex.config.oauth2.responseTypes[0] string "code"
dex.config.oauth2.responseTypes[1] string "token"
dex.config.oauth2.responseTypes[2] string "id_token"
dex.config.oauth2.skipApprovalScreen bool true
dex.config.web object {}
dex.extraEnvVars list [] Additional environment variables for example: extraEnvVars: [{"name":"SOME_VAR","value":"some value"}]
dex.image.name string "dex" set to repository in local registry for air-gapped installations
dex.ingress.annotations object {} Custom annotations that will be added to every Ingress object created by this chart, e.g. cert-manager.io/cluster-issuer: letsencrypt-auth-production or, with a namespace-specific Issuer, cert-manager.io/issuer: local-ca-issuer
dex.ingress.authHost string "localhost" hostname where the application will expose its authentication endpoint (Dex). It is used for organizations without their own external Identity Provider.
dex.ingress.tls.authSecretName string "" If you have a pre-existing secret with your own certificate and key, put its name here. If you want cert-manager to issue a certificate, set this to the name of a Secret where the TLS certificate and key will be stored. Note that dex.ingress.authHost is required when enabling TLS. If you’re deploying to AWS, you may prefer TLS termination on the AWS ELB and keep this value empty.
dex.podAnnotations object {}
dex.podDisruptionBudget object {"maxUnavailable":"","minAvailable":""} define PodDisruptionBudget
dex.resources object {"limits":{"cpu":"100m","memory":"50Mi"},"requests":{"cpu":"30m","memory":"50Mi"}} container resources
dex.serviceMonitor.additionalLabels object {}
dex.serviceMonitor.enabled bool false
dex.serviceMonitor.interval string "20s"
dex.serviceMonitor.scrapeTimeout string "10s"
dex.uriPrefix string "/dex" base context path prefix used by Dex to serve its resources
enableAlerting bool true If set to true, alerting will be enabled
enableAthenaDataSource bool false If set to true, Amazon Athena data source will be enabled
enableAutomationFilterContext bool false If set to true, schedule and alert notifications will include the filters that were applied
enableCaffeineRedisCache bool true If set to true, the local Caffeine cache in Calcique is enabled
enableCompositeGrain bool true If set to true, composite grain is enabled
enableDashboardTabularExport bool false If set to true, enables the dashboard tabular export feature
enableDataSourceBlending bool true If set to true, data source blending is enabled. Implies deployQuiverDatasource: true.
enableDefaultSmtp bool false If set to true, DEFAULT_SMTP notification channels will be enabled. Disabled by default because the sender email and name are currently unchangeable.
enableExternalRecipients bool true If set to true, it will be possible to add external recipients to the list of recipients
enableFlexConnectDataSource bool false If set to true, the FlexConnect data source will be enabled.
enableGenAIChat bool false If set to true, GenAI chat will be enabled
enableImprovedHttpStatuses bool false If set to true, improved HTTP status codes will be returned for conflict scenarios (e.g., 409 instead of 400)
enableInPlatformNotifications bool true If set to true, IN_PLATFORM notification channels will be enabled, as well as a secondary channel option
enableLineChartTrendThreshold bool true If set to true, support for styling line charts via a metric will be enabled
enableMariaDbDataSource bool true If set to true, MariaDB data source will be enabled
enableMotherDuckDataSource bool true If set to true, MotherDuck data source will be enabled
enableMultilingualAIAssistant bool false If set to true, GenAI chat will answer in the same language as the question
enableMySqlDataSource bool true If set to true, MySQL data source will be enabled
enableNewScheduledExport bool false If set to true, scheduled PNG, RAW widget export will be enabled
enableOracleDataSource bool true If set to true, Oracle data source will be enabled
enablePdmRemoval bool true If set to true, PDM removal is enabled and metadata are migrated
enablePinotDataSource bool true If set to true, Pinot data source will be enabled
enableRawExports bool true If set to true, enables the Quiver Raw Exports data source. Implies deployQuiverDatasource: true.
enableRichTextDescriptions bool true If set to true, the rich text descriptions feature will be enabled
enableScatterPlotClustering bool true If set to true, the scatter plot clustering feature will be enabled
enableScheduling bool true If set to true, scheduling will be enabled
enableSeamlessIdpSwitch bool false If set to true, seamless IdP switch will be enabled
enableSemanticSearch bool false If set to true, semantic search will be enabled
enableSingleStoreDataSource bool true If set to true, SingleStore data source will be enabled
enableSmartFunctions bool true If set to true, smart functions will be enabled
enableSmtp bool true If set to true, SMTP notification channels will be enabled
enableSnowflakeKeyPairAuthentication bool true If set to true, the snowflake key pair authentication feature will be enabled
enableSplitGenAIService bool true If set to true, MD sync and chatbot will be handled by separate services
enableStarrocksDataSource bool false If set to true, StarRocks data source will be enabled
enableUdfCountContext bool true If set to true, the UDF count context feature is enabled
enableUserManagement bool true If set to true, user management will be enabled
enableWorkspacesHierarchyView bool true If set to true, the workspaces hierarchy view feature will be enabled
etcd.auth.rbac.create bool false
etcd.autoCompactionMode string "periodic"
etcd.autoCompactionRetention string "5m"
etcd.extraEnvVars[0].name string "ETCD_SNAPSHOT_COUNT"
etcd.extraEnvVars[0].value string "5000"
etcd.metrics.enabled bool true enable etcd metrics
etcd.persistence.enabled bool true
etcd.replicaCount int 3
etcd.resources object {"limits":{"cpu":"300m","memory":"512Mi"},"requests":{"cpu":"100m","memory":"256Mi"}} container resources
exportBuilder.enableHikariMonitoring bool true Whether to enable HikariCP monitoring
exportBuilder.export.slides.evaluateTimeout string "60s"
exportBuilder.export.slides.screenshotsMaxSize string "40MB"
exportBuilder.extraEnvVars list [] Additional environment variables for example: extraEnvVars: [{"name":"SOME_VAR","value":"some value"}]
exportBuilder.image.name string "export-builder"
exportBuilder.jvmOptions string "-XX:ReservedCodeCacheSize=60M -Xms200m -Xmx1000m -XX:MaxMetaspaceSize=210M" Custom JVM options
exportBuilder.livenessProbe.initialDelaySeconds int 30
exportBuilder.playwright.concurrency.maxInstances int 2
exportBuilder.playwright.tracing.enabled bool false Enable Playwright tracing
exportBuilder.playwright.tracing.outputDir string "/tmp/playwright-tracing"
exportBuilder.podDisruptionBudget object {"maxUnavailable":"","minAvailable":""} define PodDisruptionBudget
exportBuilder.readinessProbe.initialDelaySeconds int 30
exportBuilder.resources object {"limits":{"cpu":"2000m","memory":"1600Mi"},"requests":{"cpu":"200m","memory":"550Mi"}} container resources
exportBuilder.startupProbe.initialDelaySeconds int 30
exportController.existingSecret string "" you can specify an existing secret with cloud credentials instead. For s3:// URLs, it must contain the keys access-key-id and secret-access-key
exportController.exportABSStorage.absAccountKey string "" ABS account access key of IAM account with access to storage container
exportController.exportABSStorage.absAccountName string "" ABS storage account name to store export files
exportController.exportABSStorage.absAuthority string "" ABS authority to use (e.g.: .blob.core.windows.net)
exportController.exportABSStorage.absClientId string "" ABS ClientID used to authenticate with user-assigned managed identity
exportController.exportABSStorage.absContainer string "" ABS container name to store export files
exportController.exportABSStorage.absContainerPrefix string "" Custom container prefix
exportController.exportFSStorage.fsRootDir string "/tmp/exports" local/mounted filesystem root dir to save exports under
exportController.exportS3Storage.endpointOverride string "" If non-empty, override S3 host with a connection string such as “localhost:3000”
exportController.exportS3Storage.s3Bucket string "" AWS bucket name to store export files
exportController.exportS3Storage.s3BucketPrefix string "" Custom bucket prefix
exportController.exportS3Storage.s3Region string "" AWS region, default “us-east-1”
exportController.extraEnvVars list [] Additional environment variables for example: extraEnvVars: [{"name":"SOME_VAR","value":"some value"}]
exportController.fileStorageBaseUrl string "/tmp/exports" Base URL for export file storage. Can be a local directory (a mounted directory when fsExportStorage.storageClassName is set) or an S3 bucket “s3://s3..amazonaws.com//”. If used with useNewExportFlow=true, fileStorageBaseUrl takes precedence over the values defined in exportFSStorage/exportS3Storage (those values are ignored).
exportController.fsExportStorage.pvcRequestStorageSize string "1Gi" Size of the export storage volume. It’s up to the storage class provider to ensure the requested size and regular cleanup.
exportController.fsExportStorage.storageClassName string "" External storage class name providing ReadWriteMany accessMode
exportController.globalCspDirectives object {}
exportController.image.name string "export-controller"
exportController.jvmOptions string "-XX:ReservedCodeCacheSize=90M -Xms130m -Xmx130m -XX:MaxMetaspaceSize=192M" Custom JVM options
exportController.livenessProbe.initialDelaySeconds int 30
exportController.podDisruptionBudget object {"maxUnavailable":"","minAvailable":""} define PodDisruptionBudget
exportController.podLabels object {}
exportController.readinessProbe.initialDelaySeconds int 30
exportController.resources object {"limits":{"cpu":"500m","memory":"730Mi"},"requests":{"cpu":"100m","memory":"490Mi"}} container resources
exportController.s3.accessKey string "" AWS access key id of IAM account with access to S3 bucket
exportController.s3.secretKey string "" AWS secret access key of IAM account with access to S3 bucket
exportController.startupProbe.initialDelaySeconds int 30
exportController.storageType string ""
exportController.useNewExportFlow bool false For scheduled exports, the following setting is used to set the TTL for pre-signed URLs from S3: expireInSeconds: 600
genAi.extraEnvVars list [] Additional environment variables for example: extraEnvVars: [{"name":"SOME_VAR","value":"some value"}]
genAi.fastembed.embedding_workers int 4
genAi.fastembed.metadata_sync_workers int 2
genAi.fastembed.modelName string "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"
genAi.fastembed.threads int 4
genAi.fastembed.vectorDim int 384
genAi.image.name string "gen-ai"
genAi.langfuse.enabled string "no"
genAi.langfuse.secret string nil
genAi.langfuse.tracingEnvironment string "development"
genAi.metricScoreBoostingMultiplier float 1.2
genAi.podDisruptionBudget object {"maxUnavailable":"","minAvailable":""} define PodDisruptionBudget
genAi.podMonitor.path string "/metrics"
genAi.podMonitor.port string "metrics"
genAi.reranker.enabled bool true
genAi.reranker.modelName string "cross-encoder/mmarco-mMiniLMv2-L12-H384-v1"
genAi.reranker.modelPath string "/app/sentence_transformers"
genAi.resources object {"limits":{"cpu":"2000m","memory":"2048Mi"},"requests":{"cpu":"1000m","memory":"1024Mi"}} container resources
genAi.sentenceTransformers.modelPath string "/app/sentence_transformers"
genAi.vectorStores.qdrant.grpc_port int 6334
genAi.vectorStores.qdrant.host string "qdrant-db"
genAi.vectorStores.qdrant.prefer_grpc bool true
genAi.vectorStores.qdrant.rest_port int 6333
genAi.vectorStores.stores string "qdrant"
genAiService.extraEnvVars list [] Additional environment variables for example: extraEnvVars: [{"name":"SOME_VAR","value":"some value"}]
genAiService.fastembed.embedding_workers int 4
genAiService.fastembed.metadata_sync_workers int 2
genAiService.fastembed.modelName string "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"
genAiService.fastembed.threads int 4
genAiService.fastembed.vectorDim int 384
genAiService.image.name string "gen-ai-service"
genAiService.langfuse.enabled string "no"
genAiService.langfuse.secret string nil
genAiService.langfuse.tracingEnvironment string "development"
genAiService.podDisruptionBudget object {"maxUnavailable":"","minAvailable":""} define PodDisruptionBudget
genAiService.podMonitor.path string "/metrics"
genAiService.podMonitor.port string "metrics"
genAiService.reranker.enabled bool true
genAiService.reranker.modelName string "cross-encoder/mmarco-mMiniLMv2-L12-H384-v1"
genAiService.reranker.modelPath string "/app/sentence_transformers"
genAiService.resources object {"limits":{"cpu":"2000m","memory":"2048Mi"},"requests":{"cpu":"1000m","memory":"1024Mi"}} container resources
genAiService.sentenceTransformers.modelPath string "/app/sentence_transformers"
genAiService.vectorStores.qdrant.grpc_port int 6334
genAiService.vectorStores.qdrant.host string "qdrant-db"
genAiService.vectorStores.qdrant.prefer_grpc bool true
genAiService.vectorStores.qdrant.rest_port int 6333
genAiService.vectorStores.stores string "qdrant"
global.imageRegistry string nil Set this variable to your private Docker registry if you want to deploy to air-gapped installations. This affects the images needed to deploy the postgresql-ha subchart.
homeUi.extraEnvVars list [{"name":"GDC_FEATURES_VALUES_ENABLE_EXECUTION_CANCELLING","value":"true"}] Additional environment variables for example: extraEnvVars: [{"name":"SOME_VAR","value":"some value"}]
homeUi.extraVolumeMounts list [] Additional volumes to mount.
homeUi.extraVolumes list [] Additional volumes
homeUi.image.name string "home-ui"
homeUi.podDisruptionBudget object {"maxUnavailable":"","minAvailable":""} define PodDisruptionBudget
homeUi.resources object {"limits":{"cpu":"100m","memory":"35Mi"},"requests":{"cpu":"10m","memory":"15Mi"}} container resources
ingress.annotations object {"nginx.ingress.kubernetes.io/proxy-body-size":"20m"} Custom annotations that will be added to every Ingress object created by this chart
ingress.ingressClassName string "nginx" Class of the Ingress controller used for this deployment
ingress.lbProtocol string "https" This setting informs applications if the load balancer exposes the applications on HTTPS or plain unencrypted HTTP. For production workload, we strongly suggest using HTTPS. For local development purposes (e.g. in k3d cluster), HTTP is sufficient.
istio.enabled bool false enable Istio support
ldmModeler.extraEnvVars list [{"name":"GDC_FEATURES_VALUES_ENABLE_EXECUTION_CANCELLING","value":"true"}] Additional environment variables for example: extraEnvVars: [{"name":"SOME_VAR","value":"some value"}]
ldmModeler.image.name string "ldm-modeler"
ldmModeler.podDisruptionBudget object {"maxUnavailable":"","minAvailable":""} define PodDisruptionBudget
ldmModeler.resources object {"limits":{"cpu":"100m","memory":"35Mi"},"requests":{"cpu":"10m","memory":"15Mi"}} container resources
license.existingSecret string "" Existing secret containing the license key in the license property. Overrides license.key. The secret must be created before the Helm chart installation, e.g. using the command: kubectl -n <namespace> create secret generic gd-license --from-literal=license="<put-your-license-key-here>"
license.key string "<put-your-license-key-here>" set to the license key provided by GoodData. Used only if license.existingSecret is not set.
measureEditor.extraEnvVars list [{"name":"GDC_FEATURES_VALUES_ENABLE_EXECUTION_CANCELLING","value":"true"}] Additional environment variables for example: extraEnvVars: [{"name":"SOME_VAR","value":"some value"}]
measureEditor.image.name string "measure-editor"
measureEditor.podDisruptionBudget object {"maxUnavailable":"","minAvailable":""} define PodDisruptionBudget
measureEditor.resources object {"limits":{"cpu":"100m","memory":"35Mi"},"requests":{"cpu":"10m","memory":"15Mi"}} container resources
metadataApi.bootstrap.existingSecret string "" If set, existing secret containing user and password can be used instead of the two values above.
metadataApi.cacheStrategy string "" String in the format organization1:DURABLE;organization2:EPHEMERAL. Allows setting a particular organization’s cache strategy. Allowed values of the second part are DURABLE and EPHEMERAL
metadataApi.database.existingExporterSecret string "" you can specify a custom secret with the md database exporter password; the key needs to be “exporter-password”
metadataApi.database.existingSecret string "" you can specify a custom secret with the md database password
metadataApi.database.existingSecretKey string "postgresql-password" you can specify a custom secret key to read the md database password from
metadataApi.database.exporterPassword string "VerySecretPassword" MD access for export views (user ‘md_exporter’)
metadataApi.database.name string "md"
metadataApi.database.password string ""
metadataApi.database.user string "" if user is empty, the default postgres user and password are used
metadataApi.encryptor.enabled bool true enable encryption of data source credentials and organization OIDC secrets in the database.
metadataApi.encryptor.existingSecret string "" optionally, pass “keySet” in a Secret instead.
metadataApi.encryptor.keySet string "" keyset generated by the tinkey tool; must be set if the encryptor is enabled
metadataApi.extraCache string "" String in the format organization1:12345;organization2:54321. Allows setting a particular organization’s extraCache budget in bytes
metadataApi.extraEnvVars list [] Additional environment variables for example: extraEnvVars: [{"name":"SOME_VAR","value":"some value"}]
metadataApi.globalCspDirectives object {} Additional directives for the Content-Security-Policy. It is a map of keys (CSP directives) and their values. Refer to https://w3c.github.io/webappsec-csp/#csp-directives for the list of available directives. These directives are merged with a preloaded set of CSP directives essential for basic GoodData.CN operation
metadataApi.grpc object {}
metadataApi.image.name string "metadata-api"
metadataApi.jvmOptions string "-XX:ReservedCodeCacheSize=140M -Xms600m -Xmx600m -XX:MaxMetaspaceSize=210M" Custom JVM options
metadataApi.livenessProbe.initialDelaySeconds int 30
metadataApi.podDisruptionBudget object {"maxUnavailable":"","minAvailable":""} define PodDisruptionBudget
metadataApi.readinessProbe.initialDelaySeconds int 30
metadataApi.resources object {"limits":{"cpu":"1250m","memory":"1500Mi"},"requests":{"cpu":"100m","memory":"1000Mi"}} container resources
metadataApi.startupProbe.initialDelaySeconds int 30
metadataApi.txnBindToPrimary bool false If set to true, metadata-api always routes selected SELECT statements to the primary node via pgpool in the PG-HA configuration
organizationController.extraEnvVars list [] Additional environment variables for example: extraEnvVars: [{"name":"SOME_VAR","value":"some value"}]
organizationController.image.name string "organization-controller"
organizationController.kubeClientTimeout int 10
organizationController.podDisruptionBudget object {"maxUnavailable":"","minAvailable":""} define PodDisruptionBudget
organizationController.podMonitor.path string "/"
organizationController.podMonitor.port string "metrics"
organizationController.resources object {"limits":{"cpu":"100m","memory":"200Mi"},"requests":{"cpu":"10m","memory":"50Mi"}} container resources
postgresql-ha.metrics.enabled bool true
postgresql-ha.nameOverride string "db"
postgresql-ha.pgpool.adminPassword string "pgpooladmin"
postgresql-ha.pgpool.clientIdleLimit int 1860
postgresql-ha.pgpool.customUsers.passwords string "" Comma-separated passwords for the users mentioned above - applicable when you define dedicated users for the Dex, MetadataApi, and SqlExecutor databases.
postgresql-ha.pgpool.customUsers.usernames string "" Comma-separated usernames - applicable when you define dedicated users for the Dex, MetadataApi, and SqlExecutor databases.
postgresql-ha.pgpool.customUsersSecret string "" You can provide a secret with ‘usernames’ and ‘passwords’ in the same format as mentioned above.
postgresql-ha.pgpool.maxPool int 1
postgresql-ha.pgpool.numInitChildren int 70
postgresql-ha.pgpool.replicaCount int 2
postgresql-ha.postgresql.existingSecret string "" If set, an existing secret containing password and repmgrPassword can be used. See more details in the postgresql-ha chart mentioned above.
postgresql-ha.postgresql.password string "secret"
postgresql-ha.postgresql.repmgrPassword string "repmgrpassword"
postgresql-ha.postgresql.username string "postgres"
postgresql-ha.volumePermissions.enabled bool true
pulsarCleanup.enabled bool true Enables post-delete hook for cleanup of Pulsar topics, namespace and tenant
pulsarCleanup.extraEnvVars list [] Additional environment variables for example: extraEnvVars: [{"name":"SOME_VAR","value":"some value"}]
pulsarJob.extraEnvVars list [] Additional environment variables for example: extraEnvVars: [{"name":"SOME_VAR","value":"some value"}]
pulsarJob.namespacePerRelease bool true If false, uncomment and set 'tenant' and 'namespace' below. If true, the names of Pulsar's tenant and namespace are generated from the Kubernetes namespace and release name. Note that if you set fixed names, you must avoid conflicts among multiple instances of this chart sharing the same Pulsar cluster.
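When disabling namespacePerRelease, fixed names can be supplied. A sketch assuming the 'tenant' and 'namespace' keys referenced in the comment above; verify the exact key names in the chart's default values file:

```yaml
pulsarJob:
  namespacePerRelease: false
  # Fixed names must be unique per chart instance sharing the Pulsar
  # cluster; both values below are illustrative:
  tenant: "gooddata-cn"
  namespace: "prod-instance"
```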
qdrant.config.storage.collection.replication_factor int 3
qdrant.config.storage.collection.write_consistency_factor int 2
qdrant.fullnameOverride string "qdrant-db"
qdrant.podDisruptionBudget.enabled bool true enable PDB
qdrant.replicaCount int 3
qdrant.resources object {"limits":{"cpu":"1000m","memory":"512Mi"},"requests":{"cpu":"500m","memory":"368Mi"}} container resources
quiver.absDatasourceFsStorage.absAccountKey string "" ABS secret access key of storage account
quiver.absDatasourceFsStorage.absAccountName string "" ABS storage account name to store export files
quiver.absDatasourceFsStorage.absAuthority string "" ABS authority to use for GDC ABS BlobStorage client (e.g.: .blob.core.windows.net)
quiver.absDatasourceFsStorage.absBlobStorageAuthority string "" ABS Blob authority override - defaults to “.blob.core.windows.net”
quiver.absDatasourceFsStorage.absBlobStorageScheme string "" ABS Blob Storage scheme override - defaults to “https”
quiver.absDatasourceFsStorage.absClientId string "" ABS ClientID used to authenticate with user-assigned managed identity
quiver.absDatasourceFsStorage.absContainer string "" ABS container name to store export files
quiver.absDatasourceFsStorage.absContainerPrefix string "" Custom container prefix
quiver.absDatasourceFsStorage.absDfsStorageAuthority string "" ABS DFS authority override - defaults to “.dfs.core.windows.net”
quiver.absDatasourceFsStorage.absDfsStorageScheme string "" ABS DFS scheme override - defaults to “https”
quiver.absDatasourceFsStorage.authType string "" Authentication type (“azure_tokens”
quiver.absDurableStorage.absAccountKey string "" ABS secret storage account key
quiver.absDurableStorage.absAccountName string "" ABS account name to store datasource files
quiver.absDurableStorage.absBlobStorageAuthority string "" ABS Blob Storage authority override - defaults to “.blob.core.windows.net”
quiver.absDurableStorage.absBlobStorageScheme string "" ABS Blob Storage scheme override - defaults to “https”
quiver.absDurableStorage.absClientId string "" ABS ClientID used to authenticate using User-assigned Managed Identity
quiver.absDurableStorage.absContainer string "" ABS container name to store datasource files
quiver.absDurableStorage.absContainerPrefix string "" Custom container prefix
quiver.absDurableStorage.absDfsStorageAuthority string "" ABS DFS authority override - defaults to “.dfs.core.windows.net”
quiver.absDurableStorage.absDfsStorageScheme string "" ABS DFS scheme override - defaults to “https”
quiver.absDurableStorage.authType string "" Authentication type (“azure_tokens”
quiver.absDurableStorage.durableABSWritesInProgress string "" Maximum number of write streams to ABS (empty = default)
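Putting the ABS keys above together, a hedged sketch of durable caches on Azure Blob Storage; account, container, and credential values are placeholders:

```yaml
quiver:
  # Any later change of durableStorageType requires an ETCD wipe:
  durableStorageType: "ABS"
  absDurableStorage:
    absAccountName: "mystorageacct"   # placeholder
    absContainer: "gooddata-caches"   # placeholder
    # Either a storage account key ...
    absAccountKey: "<account-key>"
    # ... or managed-identity authentication:
    # authType: "azure_tokens"
    # absClientId: "<client-id>"
```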
quiver.advertiseFlightPort string "16001" Port that quiver advertises to clients; clients use this port when connecting. Note that quiver is an internal name for the FlexQuery service.
quiver.cacheCountLimit int 5000
quiver.concurrentPutRequests string "" Override of maximum number of DoPut requests that can be processed concurrently (by default derived from CPU count)
quiver.datasourceFs.analysisSampleSize int 1048576 Maximal size of the sample used while analyzing a CSV file (in bytes).
quiver.datasourceFs.fullTypeDetection bool true If true, full type analysis will be run when analyzing CSV files.
quiver.datasourceFs.maxAnalysisRuns int 1 Maximal number of analysis runs we allow to run in parallel (per node).
quiver.datasourceFs.maxConnections int 8 Maximal number of open connections to the FS datasource
quiver.datasourceFs.maxFileColumns int 1000 Maximal number of columns we allow the users to use in a CSV file.
quiver.datasourceFs.maxFileSize int 20971520 Maximal size of a CSV file we allow the users to use (in bytes).
quiver.datasourceFs.maxFileSizeTotal int 209715200 Maximal size of all the CSV files we allow the users to use per organization (in bytes).
quiver.datasourceFs.poolReplicas int 2 Number of pool replicas of the datasource service
quiver.datasourceFs.storageType string "S3" Type of the storage where to store datasource files (“FS” or “S3”).
quiver.durableStorageType string "" Type of storage for durable caches: "", "S3", "FS", or "ABS". Any change of this value requires an ETCD wipe to refresh the configuration!
quiver.etcdRegistrationTtl string "" Seconds how long can a FlexQuery node go without refreshing its registration in etcd (empty = default 30)
quiver.extraEnvVarsCache list []
quiver.extraEnvVarsDatasource list []
quiver.extraEnvVarsXtab list [] Additional environment variables for xtab and cache deployments for example: extraEnvVarsXtab: [{"name":"QUIVER_FOO","value":"1"}]
quiver.flightDriver.maxSchemaMetadataBytes int 4096 Maximal size of flight schema metadata of the flights coming into the flight data source
quiver.fsDatasourceFsStorage.storageClassName string "" External storage class name providing ReadWriteMany accessMode
quiver.fsDatasourceFsStorage.storageSize string "1Gi" Size of the datasource storage volume
quiver.fsDurableStorage.storageClassName string "" External storage class name providing ReadWriteMany accessMode (maximum cache storage size taken from resultCache.totalCacheLimit)
quiver.glibcTunables string "glibc.malloc.trim_threshold=128:glibc.malloc.arena_max=2"
quiver.image.name string "quiver"
quiver.limitFlightCount int 50000
quiver.optimizations.etcd_disk_flight_catalog bool false
quiver.optimizations.etcd_init_page_size int 5000
quiver.podAnnotations object {}
quiver.podDisruptionBudget object {"maxUnavailable":"","minAvailable":""} define PodDisruptionBudget
quiver.podLabels object {}
quiver.podManagementPolicy string "Parallel"
quiver.podMonitor.path string "/"
quiver.podMonitor.port string "metrics"
quiver.putQueueSize string "" Size of DoPut queue, DoPut requests are parked here if max DoPut concurrency is reached (by default 256)
quiver.rawExportsFs.maxConnections int 8 Maximal number of open connections to the raw exports datasource
quiver.rawExportsFs.poolReplicas int 2 Number of pool replicas of the datasource service
quiver.rawExportsFs.prefix string "raw-exports" Path prefix used for raw exports inside the raw exports storage
quiver.replicaCount object {} replica count overrides for FlexQuery components
quiver.resources.cache object {"limits":{"cpu":"300m","memory":"768Mi"},"requests":{"cpu":"100m","memory":"256Mi"}} container resources
quiver.resources.datasource object {"limits":{"cpu":"1500m","memory":"768Mi"},"requests":{"cpu":"400m","memory":"384Mi"}} container resources
quiver.resources.ml object {"limits":{"cpu":"500m","memory":"512Mi"},"requests":{"cpu":"200m","memory":"256Mi"}} container resources
quiver.resources.xtab object {"limits":{"cpu":"500m","memory":"512Mi"},"requests":{"cpu":"200m","memory":"256Mi"}} container resources
quiver.s3DatasourceFsStorage.authType string "" Authentication type (“aws_tokens”
quiver.s3DatasourceFsStorage.endpointOverride string "" If non-empty, override S3 host with a connection string such as “localhost:3000”
quiver.s3DatasourceFsStorage.s3AccessKey string "" AWS access key ID of the IAM account with access to the S3 bucket
quiver.s3DatasourceFsStorage.s3Bucket string "" AWS bucket name to store datasource files
quiver.s3DatasourceFsStorage.s3BucketPrefix string "" Custom bucket prefix
quiver.s3DatasourceFsStorage.s3Region string "" AWS region, default “us-east-1”
quiver.s3DatasourceFsStorage.s3SecretKey string "" AWS secret access key of the IAM account with access to the S3 bucket
quiver.s3DatasourceFsStorage.scheme string "" S3 connection transport, default “https”
quiver.s3DurableStorage.authType string "" Authentication type (“aws_tokens”
quiver.s3DurableStorage.durableS3WritesInProgress string "" Maximum number of write streams to S3 (empty = default)
quiver.s3DurableStorage.endpointOverride string "" If non-empty, override S3 host with a connect string such as “localhost:3000”
quiver.s3DurableStorage.s3AccessKey string "" AWS access key ID of the IAM account with access to the S3 bucket
quiver.s3DurableStorage.s3Bucket string "" AWS bucket name to store caches
quiver.s3DurableStorage.s3BucketPrefix string "" Custom bucket prefix
quiver.s3DurableStorage.s3Region string "" AWS region, default “us-east-1”
quiver.s3DurableStorage.s3SecretKey string "" AWS secret access key of the IAM account with access to the S3 bucket
quiver.s3DurableStorage.scheme string "" S3 connection transport, default “https”
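Analogously for S3, a minimal sketch of durable caches; bucket name and credentials are placeholders, and static keys can be omitted when another authType is configured:

```yaml
quiver:
  # Any later change of durableStorageType requires an ETCD wipe:
  durableStorageType: "S3"
  s3DurableStorage:
    s3Bucket: "my-gooddata-caches"   # placeholder
    s3Region: "eu-west-1"            # defaults to "us-east-1" when empty
    s3AccessKey: "<access-key-id>"
    s3SecretKey: "<secret-access-key>"
    # For S3-compatible storage such as MinIO:
    # endpointOverride: "localhost:3000"
    # scheme: "http"
```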
quiver.serverCriticalRssGrace int 15 Grace period, in seconds, for which the server’s RSS usage may remain in critical state (90% memory used)
quiver.serverMallocTrimInterval int 5 Interval in seconds controlling how often the server calls malloc_trim(), which returns unused memory to the system
quiver.service.liveness.initialDelaySeconds int 10
quiver.service.liveness.path string "/live"
quiver.service.liveness.periodSeconds int 15
quiver.service.liveness.port int 8877
quiver.service.liveness.timeoutSeconds int 4
quiver.service.mlServiceName string "dataframe-ml-svc"
quiver.service.readiness.initialDelaySeconds int 5
quiver.service.readiness.path string "/ready"
quiver.service.readiness.periodSeconds int 15
quiver.service.readiness.port int 8877
quiver.service.readiness.timeoutSeconds int 2
quiver.service.xtabServiceName string "dataframe-svc"
quiver.sslCertFile string ""
quiver.storage.cache.diskCachePath string "/quiver/cache/data"
quiver.storage.cache.diskCacheSize string "900Mi"
quiver.storage.cache.diskSize string "1Gi"
quiver.storage.serverWorkDir string "/quiver/server/data"
quiver.storage.serverWorkDirSize string "256Mi"
redis-ha.exporter.enabled bool true enable redis metrics exporter
redis-ha.exporter.image string "oliver006/redis_exporter" set to repository in local registry for air-gapped installations
redis-ha.image.repository string "redis" set to repository in local registry for air-gapped installations
redis-ha.redis.config.maxmemory string "100m" This value should be tuned according to the real load. It should be set to 75 - 80% of the total memory (resources.limits.memory).
redis-ha.redis.config.maxmemory-policy string "allkeys-lru"
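As a worked example of the 75-80% guideline above: with a 128Mi Redis container memory limit, maxmemory lands at roughly the 100m chart default; with a 512Mi limit, roughly 400m would be appropriate:

```yaml
redis-ha:
  redis:
    config:
      # ~78% of a 128Mi container memory limit:
      maxmemory: "100m"
      maxmemory-policy: "allkeys-lru"
```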
resultCache.extraEnvVars list [] Additional environment variables for example: extraEnvVars: [{"name":"SOME_VAR","value":"some value"}]
resultCache.image.name string "result-cache"
resultCache.jvmOptions string "-XX:ReservedCodeCacheSize=60M -Xms1100m -Xmx1100m -XX:MaxMetaspaceSize=180M" Custom JVM options
resultCache.livenessProbe.initialDelaySeconds int 30
resultCache.podDisruptionBudget object {"maxUnavailable":"","minAvailable":""} define PodDisruptionBudget
resultCache.podLabels object {}
resultCache.rawExecutionFlightCopyTimeoutMs int 60000 Timeout in milliseconds for the copying of existing execution flights to the raw execution storage.
resultCache.readinessProbe.initialDelaySeconds int 30
resultCache.resources object {"limits":{"cpu":"750m","memory":"1555Mi"},"requests":{"cpu":"100m","memory":"1330Mi"}} container resources
resultCache.startupProbe.initialDelaySeconds int 30
resultCache.totalCacheLimit int 34359738368
resultCache.workspaceBaselineCache int 0
scanModel.extraEnvVars list [] Additional environment variables for example: extraEnvVars: [{"name":"SOME_VAR","value":"some value"}]
scanModel.image.name string "scan-model"
scanModel.jvmOptions string "-XX:ReservedCodeCacheSize=90M -Xms110m -Xmx110m -XX:MaxMetaspaceSize=130M" Custom JVM options
scanModel.livenessProbe.initialDelaySeconds int 30
scanModel.podDisruptionBudget object {"maxUnavailable":"","minAvailable":""} define PodDisruptionBudget
scanModel.readinessProbe.initialDelaySeconds int 30
scanModel.resources object {"limits":{"cpu":"500m","memory":"560Mi"},"requests":{"cpu":"100m","memory":"300Mi"}} container resources
scanModel.startupProbe.initialDelaySeconds int 30
service.etcd.headlessHosts list ["host1","host2"] If you have ETCD deployed externally, set headlessHosts to a list of fully qualified names of the ETCD headless hosts. Used only when useInternalQuiverEtcd: false; otherwise leave it empty.
service.etcd.port int 2379 If you have ETCD deployed externally, set port to a port of external ETCD. Used only when useInternalQuiverEtcd: false.
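A sketch for pointing the chart at an externally managed ETCD; the hostnames below are illustrative in-cluster FQDNs:

```yaml
useInternalQuiverEtcd: false
service:
  etcd:
    # Fully qualified names of the external ETCD headless hosts
    # (illustrative values):
    headlessHosts:
      - "etcd-0.etcd-headless.etcd.svc.cluster.local"
      - "etcd-1.etcd-headless.etcd.svc.cluster.local"
      - "etcd-2.etcd-headless.etcd.svc.cluster.local"
    port: 2379
```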
service.postgres.existingSecret string "" You can define your own existing secret here containing postgresql-password key with the actual password. Not applicable when deployPostgresHA: true.
service.postgres.host string "" Here you should define basic parameters for connecting to external, Postgresql-compatible DB engine (like RDS) where metadata and application configuration will be stored. Mandatory when you set deployPostgresHA: false above. When using built-in Postgresql HA chart, the configuration is retrieved automatically and these settings are not used.
service.postgres.password string "topsecret"
service.postgres.port int 5432
service.postgres.username string "postgres"
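A sketch for an external PostgreSQL-compatible database such as RDS; hostname and secret name are placeholders:

```yaml
deployPostgresHA: false
service:
  postgres:
    host: "postgres.example.com"   # placeholder
    port: 5432
    username: "postgres"
    # The secret must contain a 'postgresql-password' key:
    existingSecret: "my-postgres-secret"
```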
service.pulsar.brokerPort int 6650
service.pulsar.host string "pulsar-broker.pulsar" If you have Apache Pulsar deployed externally, set host to the fully qualified name of the broker. For the default setup, where Pulsar is deployed to the Kubernetes cluster using its Helm chart, the pattern `<release>-broker.<namespace>` should work and this value doesn't need to be changed.
service.pulsar.wsPort int 8080
service.redis.client.socket.keepalive.count int 3 Number of keepalive probes sent without acknowledgement before the connection is considered dead
service.redis.client.socket.keepalive.enabled bool true Enables TCP keepalive for Redis client
service.redis.client.socket.keepalive.idle int 30 Time in seconds after which the first keepalive packet is sent
service.redis.client.socket.keepalive.interval int 10 Time in seconds between subsequent keepalive packets
service.redis.client.socket.tcpUserTimeout int 60 TCP user timeout in seconds; the recommended value is computed from the keepalive settings: idle + (count * interval)
service.redis.clusterMode bool false When true, it will use Redis cluster protocol for communication. Useful for HA deployment.
service.redis.existingSecret string "" You can define your own existing secret here containing redis-password key with the actual password
service.redis.hosts list [] Used when using external Redis service (like Elasticache on AWS, Memorystore on GCP or so). Format is a list of hostnames where the Redis is running.
service.redis.password string "" Password for accessing Redis if the Redis authentication is turned on
service.redis.port int 6379
service.redis.useSSL bool false Use SSL for communication with Redis cache
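Combining the keys above, a sketch for an external Redis service (e.g. ElastiCache) with keepalive tuned per the recommendation, tcpUserTimeout = idle + count * interval = 30 + 3 * 10 = 60; hostname and secret name are placeholders:

```yaml
service:
  redis:
    hosts:
      - "redis.example.com"   # placeholder
    port: 6379
    useSSL: true
    clusterMode: false
    # The secret must contain a 'redis-password' key:
    existingSecret: "my-redis-secret"
    client:
      socket:
        keepalive:
          enabled: true
          idle: 30       # seconds before the first probe
          interval: 10   # seconds between probes
          count: 3       # unacknowledged probes before dropping
        # idle + count * interval = 30 + 3 * 10:
        tcpUserTimeout: 60
```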
sqlExecutor.enableHikariMonitoring bool true Whether to enable HikariCP monitoring
sqlExecutor.extraDriversInitContainer string "" See the documentation for how to mount an image with extra drivers into GoodData.CN
sqlExecutor.extraEnvVars list [{"name":"GDC_FEATURES_VALUES_ENABLE_NEW_EXECUTOR_FLOW","value":"true"}] Additional environment variables for example: extraEnvVars: [{"name":"SOME_VAR","value":"some value"}]
sqlExecutor.gdstorage.maxConnectionsPerDataSource int 2 Maximal number of connections per data source running on the GDSTORAGE data source
sqlExecutor.image.name string "sql-executor"
sqlExecutor.jvmOptions string "-XX:ReservedCodeCacheSize=110M -Xms460m -Xmx460m -XX:MaxMetaspaceSize=256M -XX:ActiveProcessorCount=6" Custom JVM options
sqlExecutor.livenessProbe.initialDelaySeconds int 30
sqlExecutor.podDisruptionBudget object {"maxUnavailable":"","minAvailable":""} define PodDisruptionBudget
sqlExecutor.readinessProbe.initialDelaySeconds int 30
sqlExecutor.resources object {"limits":{"cpu":"500m","ephemeral-storage":"500Mi","memory":"1356Mi"},"requests":{"cpu":"100m","ephemeral-storage":"500Mi","memory":"550Mi"}} container resources
sqlExecutor.startupProbe.initialDelaySeconds int 30
tabularExporter.extraEnvVars list [] Additional environment variables for example: extraEnvVars: [{"name":"SOME_VAR","value":"some value"}]
tabularExporter.image.name string "tabular-exporter"
tabularExporter.optimizations.disableXlsxStyling string "false"
tabularExporter.optimizations.glibcTunables string "glibc.malloc.trim_threshold=128:glibc.malloc.arena_max=2"
tabularExporter.optimizations.mallocTrimInterval int 30
tabularExporter.podDisruptionBudget object {"maxUnavailable":"","minAvailable":""} define PodDisruptionBudget
tabularExporter.podMonitor.path string "/"
tabularExporter.podMonitor.port string "metrics"
tabularExporter.resources object {"limits":{"cpu":"200m","memory":"250Mi"},"requests":{"cpu":"50m","memory":"150Mi"}} container resources
telemetryEnabled bool true If set to true, deployed services will report telemetry data to https://matomo.anywhere.gooddata.com/matomo.php
tools.extraEnvVars list [] Additional environment variables for example: extraEnvVars: [{"name":"SOME_VAR","value":"some value"}]
tools.image.name string "tools"
tools.replicaCount int 1
tools.resources object {"limits":{"cpu":"200m","memory":"200Mi"},"requests":{"cpu":"10m","memory":"5Mi"}} container resources
ui object {}
useInternalQuiverEtcd bool true If set to true, this chart will install bitnami/etcd as a part of the deployment. If false, your existing external ETCD must be configured in section service.etcd below.
visualExporterProxy.image.name string "visual-exporter-proxy"
visualExporterProxy.permittedDestinations string "" Space-delimited list of RFC1918 IPs or CIDRs the visual exporter may connect to. For security reasons, the exporter cannot connect to hosts in the following ranges: 0.0.0.0/8, 10.0.0.0/8, 127.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, 169.254.0.0/16. If your organizations use IP addresses from these ranges, you must add those IPs here to make visual exports (PDF) work. Example: permittedDestinations: "10.1.2.3/32 172.16.0.0/24"
visualExporterProxy.resources object {"limits":{"cpu":"100m","memory":"215Mi"},"requests":{"cpu":"50m","memory":"215Mi"}} container resources
webComponents.extraEnvVars list [{"name":"GDC_FEATURES_VALUES_ENABLE_EXECUTION_CANCELLING","value":"true"}] Additional environment variables for example: extraEnvVars: [{"name":"SOME_VAR","value":"some value"}]
webComponents.image.name string "web-components"
webComponents.podDisruptionBudget object {"maxUnavailable":"","minAvailable":""} define PodDisruptionBudget
webComponents.resources object {"limits":{"cpu":"100m","memory":"35Mi"},"requests":{"cpu":"10m","memory":"15Mi"}} container resources