Kubernetes Pods Using Default Service Account with Excessive Permissions

High Risk Infrastructure Security
kubernetes, rbac, service-accounts, privilege-escalation, cluster-security, secrets-access, pod-security, authentication

What it is

A high-severity security issue in which Kubernetes pods run under the default service account, which often carries overly broad permissions or access to sensitive cluster resources. The default service account is assigned automatically to any pod that does not specify one, and it frequently inherits broad RBAC grants that violate the principle of least privilege. The result can be privilege escalation, unauthorized access to secrets, and potential cluster compromise.
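Before reading the examples below, it helps to know how this exposure is found in a live cluster. Assuming a namespace named `production` (adjust to your environment), these commands list pods that fall back to the default service account and enumerate what that account is allowed to do:

```shell
# List pods whose effective service account is 'default'
# (Kubernetes backfills .spec.serviceAccountName when it is omitted)
kubectl get pods -n production \
  -o jsonpath='{range .items[?(@.spec.serviceAccountName=="default")]}{.metadata.name}{"\n"}{end}'

# Enumerate every permission the default service account currently holds
kubectl auth can-i --list \
  --as=system:serviceaccount:production:default -n production

# Spot-check the dangerous ones
kubectl auth can-i get secrets \
  --as=system:serviceaccount:production:default -n production
```

If the last check answers `yes`, every pod in the namespace that omits `serviceAccountName` can read secrets through the API, regardless of which secrets are mounted into it.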

# VULNERABLE: Multiple applications using default service account

# Web application using default service account
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      # No serviceAccountName specified - uses 'default'
      containers:
      - name: frontend
        image: web-frontend:latest
        ports:
        - containerPort: 3000
        env:
        - name: API_TOKEN
          valueFrom:
            secretKeyRef:
              name: api-credentials  # Readable by any pod via the default SA's broad role
              key: token
---
# API backend also using default service account
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-backend
  namespace: production
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api-backend
  template:
    metadata:
      labels:
        app: api-backend
    spec:
      # Also uses default service account
      containers:
      - name: backend
        image: api-backend:latest
        ports:
        - containerPort: 8080
        env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: database-secret  # Shared access to all secrets
              key: password
        - name: FRONTEND_CREDENTIALS
          valueFrom:
            secretKeyRef:
              name: api-credentials  # Cross-access to other app secrets
              key: frontend-key
---
# Database job using default service account
apiVersion: batch/v1
kind: CronJob
metadata:
  name: database-backup
  namespace: production
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          # CronJob also uses default SA
          containers:
          - name: backup
            image: backup-tool:latest
            command: ["/bin/bash"]
            args:
            - -c
            - |
              # Can access any secret due to broad default SA permissions
              DB_PASSWORD=$(kubectl get secret database-secret -o jsonpath='{.data.password}' | base64 -d)
              API_TOKEN=$(kubectl get secret api-credentials -o jsonpath='{.data.token}' | base64 -d)
              
              # Can create/delete resources
              kubectl create secret generic backup-temp --from-literal=data="sensitive"
              
              # Can access other applications' data
              kubectl exec deployment/web-frontend -- cat /app/config/secrets.json
          restartPolicy: OnFailure
---
# VULNERABLE: Overpermissive role for default service account
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: default-broad-permissions
rules:
# Broad access to all secrets
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list", "create", "update", "patch", "delete"]
# Can modify all configmaps
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "create", "update", "patch", "delete"]
# Can manage pods and services
- apiGroups: [""]
  resources: ["pods", "services", "endpoints"]
  verbs: ["get", "list", "create", "update", "patch", "delete"]
# Can exec into any pod
- apiGroups: [""]
  resources: ["pods/exec", "pods/log"]
  verbs: ["create", "get"]
# Can manage deployments
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: default-sa-binding
  namespace: production
subjects:
- kind: ServiceAccount
  name: default  # All pods inherit these broad permissions
  namespace: production
roleRef:
  kind: Role
  name: default-broad-permissions
  apiGroup: rbac.authorization.k8s.io
---
# VULNERABLE: Monitoring deployment accessing sensitive data
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-agent
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      # Uses default service account with excessive permissions
      containers:
      - name: agent
        image: monitoring-agent:latest
        env:
        - name: MONITOR_MODE
          value: "full-access"  # Can access all application data
        command: ["/bin/bash"]
        args:
        - -c
        - |
          # Monitor can access all secrets and application data
          while true; do
            echo "Collecting metrics from all applications..."
            
            # Access web frontend secrets
            kubectl get secret api-credentials -o yaml
            
            # Access database credentials  
            kubectl get secret database-secret -o yaml
            
            # Can exec into application pods
            kubectl exec deployment/web-frontend -- env | grep SECRET
            kubectl exec deployment/api-backend -- cat /app/database.conf
            
            sleep 60
          done

# SECURE: Each application has dedicated service account with minimal permissions

# SECURE: Web frontend with dedicated service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: web-frontend-sa
  namespace: production
  labels:
    app: web-frontend
    security-policy: minimal-permissions
automountServiceAccountToken: false  # No API access needed
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      serviceAccountName: web-frontend-sa  # Explicit service account
      automountServiceAccountToken: false  # No Kubernetes API access
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        fsGroup: 1000
      containers:
      - name: frontend
        image: web-frontend:latest
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
          capabilities:
            drop: ["ALL"]
        ports:
        - containerPort: 3000
        env:
        - name: API_BASE_URL
          valueFrom:
            configMapKeyRef:
              name: frontend-config
              key: api-url
        # Only access to frontend-specific secrets
        - name: CDN_TOKEN
          valueFrom:
            secretKeyRef:
              name: frontend-secrets  # Only frontend secrets
              key: cdn-token
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: cache
          mountPath: /app/cache
        resources:
          limits:
            memory: "256Mi"
            cpu: "200m"
          requests:
            memory: "128Mi"
            cpu: "100m"
      volumes:
      - name: tmp
        emptyDir: {}
      - name: cache
        emptyDir: {}
---
# SECURE: API backend with dedicated service account and minimal RBAC
apiVersion: v1
kind: ServiceAccount
metadata:
  name: api-backend-sa
  namespace: production
  labels:
    app: api-backend
    security-policy: api-access-required
automountServiceAccountToken: true  # Needs limited API access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: api-backend-role
rules:
# Only access to own secrets
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["api-backend-secrets", "database-connection"]
  verbs: ["get"]
# Read-only access to own configmap
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["api-backend-config"]
  verbs: ["get"]
# Access to own service for health checks
- apiGroups: [""]
  resources: ["endpoints"]
  resourceNames: ["api-backend-service"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: api-backend-binding
  namespace: production
subjects:
- kind: ServiceAccount
  name: api-backend-sa
  namespace: production
roleRef:
  kind: Role
  name: api-backend-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-backend
  namespace: production
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api-backend
  template:
    metadata:
      labels:
        app: api-backend
    spec:
      serviceAccountName: api-backend-sa  # Explicit service account
      automountServiceAccountToken: true  # Limited API access
      securityContext:
        runAsNonRoot: true
        runAsUser: 1001
        fsGroup: 1001
      containers:
      - name: backend
        image: api-backend:latest
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1001
          capabilities:
            drop: ["ALL"]
        ports:
        - containerPort: 8080
        env:
        # Only access to backend-specific secrets
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: database-connection  # Backend-specific DB secret
              key: password
        - name: JWT_SECRET
          valueFrom:
            secretKeyRef:
              name: api-backend-secrets  # Only backend secrets
              key: jwt-secret
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: app-data
          mountPath: /app/data
        resources:
          limits:
            memory: "512Mi"
            cpu: "300m"
          requests:
            memory: "256Mi"
            cpu: "150m"
      volumes:
      - name: tmp
        emptyDir: {}
      - name: app-data
        emptyDir: {}
---
# SECURE: Database backup with dedicated service account and specific permissions
apiVersion: v1
kind: ServiceAccount
metadata:
  name: database-backup-sa
  namespace: production
  labels:
    app: database-backup
    security-policy: backup-operations
automountServiceAccountToken: true
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: database-backup-role
rules:
# Access only to database backup secrets
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["database-connection", "backup-storage-credentials"]
  verbs: ["get"]
# Can create secrets for backup metadata; note that 'create' cannot be
# limited by resourceNames (the object name is unknown at authorization
# time), and resourceNames do not support wildcards like "backup-*"
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
# Read access to backup configmap
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["backup-config"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: database-backup-binding
  namespace: production
subjects:
- kind: ServiceAccount
  name: database-backup-sa
  namespace: production
roleRef:
  kind: Role
  name: database-backup-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: database-backup
  namespace: production
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: database-backup-sa  # Dedicated backup SA
          securityContext:
            runAsNonRoot: true
            runAsUser: 1002
            fsGroup: 1002
          containers:
          - name: backup
            image: secure-backup-tool:latest
            securityContext:
              allowPrivilegeEscalation: false
              readOnlyRootFilesystem: true
              runAsNonRoot: true
              runAsUser: 1002
              capabilities:
                drop: ["ALL"]
            env:
            # Only access to backup-related secrets
            - name: DB_BACKUP_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: database-connection
                  key: backup-user-password
            - name: BACKUP_STORAGE_KEY
              valueFrom:
                secretKeyRef:
                  name: backup-storage-credentials
                  key: access-key
            volumeMounts:
            - name: tmp
              mountPath: /tmp
            - name: backup-workspace
              mountPath: /backup
            command: ["/usr/bin/secure-backup"]
            args:
            - --config=/backup/config.yaml
            - --output-dir=/backup/data
            - --no-exec-access  # Cannot exec into other pods
            resources:
              limits:
                memory: "1Gi"
                cpu: "500m"
              requests:
                memory: "512Mi"
                cpu: "250m"
          volumes:
          - name: tmp
            emptyDir: {}
          - name: backup-workspace
            emptyDir:
              sizeLimit: 5Gi
          restartPolicy: OnFailure
---
# SECURE: Monitoring with read-only service account and no secret access
apiVersion: v1
kind: ServiceAccount
metadata:
  name: monitoring-agent-sa
  namespace: production
  labels:
    app: monitoring-agent
    security-policy: read-only-metrics
automountServiceAccountToken: true
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: monitoring-agent-role
rules:
# Read-only access to pods for metrics collection
- apiGroups: [""]
  resources: ["pods", "services", "endpoints"]
  verbs: ["get", "list"]
# No access to secrets or configmaps
# No exec or logs access
# Limited to metrics endpoints only
- apiGroups: [""]
  resources: ["pods/status"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: monitoring-agent-binding
  namespace: production
subjects:
- kind: ServiceAccount
  name: monitoring-agent-sa
  namespace: production
roleRef:
  kind: Role
  name: monitoring-agent-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-agent
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      serviceAccountName: monitoring-agent-sa  # Read-only monitoring SA
      securityContext:
        runAsNonRoot: true
        runAsUser: 1003
        fsGroup: 1003
      containers:
      - name: agent
        image: secure-monitoring-agent:latest
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1003
          capabilities:
            drop: ["ALL"]
        env:
        - name: MONITOR_MODE
          value: "metrics-only"  # Restricted monitoring mode
        - name: SCRAPE_INTERVAL
          value: "30s"
        ports:
        - containerPort: 8080
          name: metrics
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: metrics-storage
          mountPath: /metrics
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
        resources:
          limits:
            memory: "128Mi"
            cpu: "100m"
          requests:
            memory: "64Mi"
            cpu: "50m"
      volumes:
      - name: tmp
        emptyDir: {}
      - name: metrics-storage
        emptyDir:
          sizeLimit: 1Gi
---
# SECURE: Network policies restricting API server access by service account label
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-api-server-access
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  # Allow DNS
  - to: []
    ports:
    - protocol: UDP
      port: 53
  # Allow application traffic
  - to:
    - podSelector: {}
    ports:
    - protocol: TCP
      port: 8080
    - protocol: TCP
      port: 3000
  # No rule for the API server here: that egress is denied by default
---
# Only pods labeled api-access-required may reach the API server
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-server-access
  namespace: production
spec:
  podSelector:
    matchLabels:
      security-policy: api-access-required
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - protocol: TCP
      port: 443

💡 Why This Fix Works

The vulnerable example shows multiple applications sharing the default service account with overly broad permissions, allowing cross-application access to secrets and resources. The secure version creates dedicated service accounts for each application with minimal, specific permissions, proper security contexts, and network policies to enforce boundaries.
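Once the dedicated accounts are in place, the boundary can be verified from outside the pods. Assuming the manifests above have been applied to the `production` namespace, impersonation checks should now answer `no` for anything outside each account's role:

```shell
# The frontend SA has no role bound to it (and token mounting is
# disabled), so every API check should return "no"
kubectl auth can-i get secrets \
  --as=system:serviceaccount:production:web-frontend-sa -n production

# The backend SA may get only its named secrets; listing is denied
kubectl auth can-i list secrets \
  --as=system:serviceaccount:production:api-backend-sa -n production

# The monitoring SA can no longer exec into pods
kubectl auth can-i create pods/exec \
  --as=system:serviceaccount:production:monitoring-agent-sa -n production
```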

Why it happens

When Kubernetes pods are deployed without specifying a service account, they automatically use the 'default' service account in their namespace. This default account often has more permissions than necessary, especially in environments where cluster administrators have granted broad RBAC permissions to default service accounts for convenience.

Root causes

Pods Deployed Without Explicit Service Account

Omitting `serviceAccountName` from a pod spec silently assigns the namespace's 'default' service account. Because the assignment is implicit, teams rarely audit what that account can do; any RBAC granted to it for one workload's convenience is then inherited by every pod in the namespace that does not override it.

Preview example – YAML
# VULNERABLE: Pod using default service account implicitly
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-application
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      # No serviceAccountName specified - uses 'default'
      containers:
      - name: webapp
        image: webapp:latest
        ports:
        - containerPort: 8080
        env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: url
---
# VULNERABLE: Default service account with excessive permissions
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-admin-binding
subjects:
- kind: ServiceAccount
  name: default
  namespace: production
roleRef:
  kind: ClusterRole
  name: cluster-admin  # Excessive permissions
  apiGroup: rbac.authorization.k8s.io
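Because the omission is silent, it is worth catching in CI before manifests reach the cluster. A minimal sketch in Python (function names and the reduced manifests are illustrative) that flags workloads relying on the default service account:

```python
def uses_default_service_account(manifest: dict) -> bool:
    """True if the pod template will run as the namespace 'default'
    service account (field omitted, empty, or explicitly 'default')."""
    pod_spec = (
        manifest.get("spec", {})
        .get("template", {})
        .get("spec", {})
    )
    # 'serviceAccount' is the deprecated alias of 'serviceAccountName'
    sa = pod_spec.get("serviceAccountName") or pod_spec.get("serviceAccount")
    return sa in (None, "default")


def audit(manifests: list[dict]) -> list[str]:
    """Names of workloads that would inherit the default SA's permissions."""
    return [
        m.get("metadata", {}).get("name", "<unnamed>")
        for m in manifests
        if m.get("kind") in ("Deployment", "StatefulSet", "DaemonSet", "Job")
        and uses_default_service_account(m)
    ]


# Example: the vulnerable Deployment above, reduced to the relevant fields
vulnerable = {
    "kind": "Deployment",
    "metadata": {"name": "web-application"},
    "spec": {"template": {"spec": {}}},  # no serviceAccountName
}
fixed = {
    "kind": "Deployment",
    "metadata": {"name": "web-application"},
    "spec": {"template": {"spec": {"serviceAccountName": "webapp-sa"}}},
}
print(audit([vulnerable, fixed]))  # prints ['web-application']
```

A real pipeline would parse the rendered manifests (e.g. with a YAML library) and fail the build when `audit` returns anything.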

Default Service Account with Secret and ConfigMap Access

Default service accounts are often granted broad permissions to access secrets and configmaps across namespaces. This allows any pod using the default service account to potentially access sensitive data like database passwords, API keys, and configuration data that should be restricted to specific applications.

Preview example – YAML
# VULNERABLE: Overpermissive role binding for default service account
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: broad-access-role
rules:
- apiGroups: [""]
  resources: ["secrets", "configmaps"]
  verbs: ["get", "list", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "create", "delete"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: default-broad-access
  namespace: production
subjects:
- kind: ServiceAccount
  name: default  # Default SA gets broad access
  namespace: production
roleRef:
  kind: Role
  name: broad-access-role
  apiGroup: rbac.authorization.k8s.io
---
# VULNERABLE: Jobs and CronJobs using default service account
apiVersion: batch/v1
kind: CronJob
metadata:
  name: data-sync
  namespace: production
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          # Uses default service account with broad permissions
          containers:
          - name: sync
            image: data-sync:latest
            command: ["/bin/sh"]
            args:
            - -c
            - |
              # Can list any secret in the namespace
              kubectl get secrets
              kubectl get secret db-password -o jsonpath='{.data.password}' | base64 -d
              
              # Can create/delete resources
              kubectl delete pod --all
          restartPolicy: OnFailure

Legacy Helm Charts and Operators Using Default Service Account

Many legacy Helm charts and Kubernetes operators are deployed without explicit service account configuration, defaulting to the default service account. These applications often require elevated permissions to function, leading to over-privileged default service accounts that affect all pods in the namespace.

Preview example – YAML
# VULNERABLE: Helm chart template without service account
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "myapp.fullname" . }}
  labels:
    {{- include "myapp.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "myapp.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "myapp.selectorLabels" . | nindent 8 }}
    spec:
      # No serviceAccountName - uses default
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
        env:
        - name: KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # Application code can reach the cluster API via the SA
        # credentials auto-mounted at
        # /var/run/secrets/kubernetes.io/serviceaccount

# VULNERABLE: Operator with default service account
apiVersion: apps/v1
kind: Deployment
metadata:
  name: database-operator
  namespace: operator-system
spec:
  replicas: 1
  selector:
    matchLabels:
      name: database-operator
  template:
    metadata:
      labels:
        name: database-operator
    spec:
      # Operator using default service account
      containers:
      - name: manager
        image: database-operator:v1.0.0
        env:
        - name: WATCH_NAMESPACE
          value: ""
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: OPERATOR_NAME
          value: "database-operator"
---
# VULNERABLE: Default service account granted operator permissions
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: database-operator-binding
subjects:
- kind: ServiceAccount
  name: default  # Using default SA for operator!
  namespace: operator-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
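The conventional remedy in chart templates is the `serviceAccount` values block that `helm create` scaffolds: the chart creates (or references) a named account and wires it into the pod spec. A sketch of that pattern, assuming the helper names from a chart called `myapp`:

```yaml
# values.yaml
serviceAccount:
  create: true
  name: ""   # falls back to the chart fullname when empty
---
# templates/serviceaccount.yaml
{{- if .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "myapp.serviceAccountName" . }}
automountServiceAccountToken: false
{{- end }}
---
# templates/deployment.yaml (excerpt)
spec:
  template:
    spec:
      serviceAccountName: {{ include "myapp.serviceAccountName" . }}
```

With this in place, operators and charts never fall back to `default`, and any RBAC they need can be bound to their own account instead.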

Multi-tenant Environments with Shared Default Service Accounts

In multi-tenant Kubernetes environments, different teams or applications share namespaces and so inadvertently share the same default service account. This violates security boundaries: one application can reach resources intended for another through the shared account's permissions.

Preview example – YAML
# VULNERABLE: Multiple applications sharing default service account
---
# Team A application
apiVersion: apps/v1
kind: Deployment
metadata:
  name: team-a-app
  namespace: shared-namespace
spec:
  replicas: 2
  selector:
    matchLabels:
      app: team-a-app
  template:
    metadata:
      labels:
        app: team-a-app
    spec:
      # Uses default service account (shared with other teams)
      containers:
      - name: app
        image: team-a/application:latest
        env:
        - name: SECRET_ACCESS
          value: "enabled"  # Can access team B secrets!
---
# Team B application
apiVersion: apps/v1
kind: Deployment
metadata:
  name: team-b-app
  namespace: shared-namespace
spec:
  replicas: 2
  selector:
    matchLabels:
      app: team-b-app
  template:
    metadata:
      labels:
        app: team-b-app
    spec:
      # Uses same default service account as team A
      containers:
      - name: app
        image: team-b/application:latest
        env:
        - name: DATABASE_ACCESS
          value: "enabled"  # Can access team A database!
---
# VULNERABLE: Shared permissions affect both teams
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: shared-namespace
  name: shared-default-role
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["team-a-secrets", "team-b-secrets"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "create"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: shared-default-binding
  namespace: shared-namespace
subjects:
- kind: ServiceAccount
  name: default  # Both teams inherit these permissions
  namespace: shared-namespace
roleRef:
  kind: Role
  name: shared-default-role
  apiGroup: rbac.authorization.k8s.io

Fixes

1

Create Dedicated Service Accounts with Minimal Permissions

Create specific service accounts for each application or workload with only the minimum permissions required for their function. Use the principle of least privilege to grant access only to the specific resources and verbs needed by each application.

View implementation – YAML
# SECURE: Dedicated service account with minimal permissions
apiVersion: v1
kind: ServiceAccount
metadata:
  name: webapp-service-account
  namespace: production
  labels:
    app: web-application
    security-policy: restricted
automountServiceAccountToken: true
imagePullSecrets:
- name: webapp-registry-secret
---
# SECURE: Minimal role for web application
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: webapp-minimal-role
rules:
# Only access to specific secrets needed by the application
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["webapp-database-secret", "webapp-cache-secret"]
  verbs: ["get"]
# Read-only access to specific configmaps
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["webapp-config", "webapp-features"]
  verbs: ["get"]
# Read-only access to pods in the namespace; note that 'list' cannot
# be restricted by resourceNames, so no name scoping is applied here
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
# Limited service discovery
- apiGroups: [""]
  resources: ["endpoints"]
  resourceNames: ["webapp-service"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: webapp-role-binding
  namespace: production
subjects:
- kind: ServiceAccount
  name: webapp-service-account
  namespace: production
roleRef:
  kind: Role
  name: webapp-minimal-role
  apiGroup: rbac.authorization.k8s.io
---
# SECURE: Deployment using dedicated service account
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-application
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-application
  template:
    metadata:
      labels:
        app: web-application
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
    spec:
      serviceAccountName: webapp-service-account  # Explicit SA
      automountServiceAccountToken: true
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      containers:
      - name: webapp
        image: webapp:latest
        imagePullPolicy: Always
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
          capabilities:
            drop: ["ALL"]
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: webapp-database-secret
              key: url
        - name: CACHE_URL
          valueFrom:
            secretKeyRef:
              name: webapp-cache-secret
              key: redis-url
        - name: APP_CONFIG
          valueFrom:
            configMapKeyRef:
              name: webapp-config
              key: config.yaml
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: cache
          mountPath: /app/cache
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
        resources:
          limits:
            memory: "512Mi"
            cpu: "500m"
          requests:
            memory: "256Mi"
            cpu: "250m"
      volumes:
      - name: tmp
        emptyDir:
          sizeLimit: 1Gi
      - name: cache
        emptyDir:
          sizeLimit: 2Gi
      nodeSelector:
        kubernetes.io/os: linux
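To keep workloads from silently regressing to the default account, the rule can also be enforced at admission time. A sketch using a ValidatingAdmissionPolicy (CEL, stable in Kubernetes 1.30+; policy names are illustrative) — Kyverno or Gatekeeper can express the same rule:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: require-explicit-service-account
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups: ["apps"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["deployments", "statefulsets", "daemonsets"]
  validations:
  - expression: >-
      has(object.spec.template.spec.serviceAccountName) &&
      object.spec.template.spec.serviceAccountName != 'default'
    message: "Workloads must set a dedicated serviceAccountName (not 'default')."
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: require-explicit-service-account-binding
spec:
  policyName: require-explicit-service-account
  validationActions: ["Deny"]
  matchResources:
    namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: production
```
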

2

Disable Default Service Account Token Mounting

Disable automatic mounting of service account tokens for pods that don't need Kubernetes API access. This prevents unauthorized access to the cluster API even if the container is compromised. Use explicit token mounting only when necessary.

View implementation – YAML
# SECURE: Disable default service account token mounting
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: production
automountServiceAccountToken: false  # Disable automatic token mounting
---
# SECURE: Deployment without service account token
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-app
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend-app
  template:
    metadata:
      labels:
        app: frontend-app
    spec:
      # Explicitly disable service account token mounting
      automountServiceAccountToken: false
      securityContext:
        runAsNonRoot: true
        runAsUser: 1001
        runAsGroup: 1001
        fsGroup: 1001
      containers:
      - name: frontend
        image: frontend-app:latest
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1001
          capabilities:
            drop: ["ALL"]
        ports:
        - containerPort: 3000
          name: http
        env:
        - name: NODE_ENV
          value: "production"
        - name: API_BASE_URL
          valueFrom:
            configMapKeyRef:
              name: frontend-config
              key: api-base-url
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: nginx-cache
          mountPath: /var/cache/nginx
        - name: nginx-run
          mountPath: /var/run
        # No /var/run/secrets/kubernetes.io/serviceaccount volume
        resources:
          limits:
            memory: "256Mi"
            cpu: "200m"
          requests:
            memory: "128Mi"
            cpu: "100m"
      volumes:
      - name: tmp
        emptyDir: {}
      - name: nginx-cache
        emptyDir: {}
      - name: nginx-run
        emptyDir: {}
---
# SECURE: Service account with explicit token mounting for API access
apiVersion: v1
kind: ServiceAccount
metadata:
  name: api-client-sa
  namespace: production
automountServiceAccountToken: false  # Disabled by default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-client
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api-client
  template:
    metadata:
      labels:
        app: api-client
    spec:
      serviceAccountName: api-client-sa
      # Explicitly enable only where needed
      automountServiceAccountToken: true
      containers:
      - name: client
        image: api-client:latest
        env:
        - name: KUBERNETES_TOKEN_PATH
          value: "/var/run/secrets/kubernetes.io/serviceaccount/token"
        volumeMounts:
        - name: kube-token
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          readOnly: true
      volumes:
      - name: kube-token
        projected:
          sources:
          - serviceAccountToken:
              path: token
              expirationSeconds: 3600  # 1 hour token expiration
              audience: api-client
          - configMap:
              name: kube-root-ca.crt
              items:
              - key: ca.crt
                path: ca.crt
          - downwardAPI:
              items:
              - path: namespace
                fieldRef:
                  fieldPath: metadata.namespace
---

# SECURE: Network policy to restrict API server access
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-api-access
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  # Allow DNS resolution
  - to: []
    ports:
    - protocol: UDP
      port: 53
  # Allow normal application traffic between pods
  - to:
    - podSelector: {}
    ports:
    - protocol: TCP
      port: 8080
    - protocol: TCP
      port: 3000
  # No rule permits egress to the API server, so it is denied for
  # every pod this policy selects
---
# SECURE: Separate policy granting API server egress only to pods
# explicitly labeled as needing it (policies are additive)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-server-access
  namespace: production
spec:
  podSelector:
    matchLabels:
      needs-api-access: "true"
  policyTypes:
  - Egress
  egress:
  - to: []
    ports:
    - protocol: TCP
      port: 443
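One subtlety worth keeping in mind when combining the manifests above: the pod-level `automountServiceAccountToken` field takes precedence over the ServiceAccount's, and when neither is set Kubernetes mounts the token. A minimal sketch of that resolution order (the function name is illustrative, not a Kubernetes API):

```python
# Sketch: how Kubernetes resolves the effective automount behavior.
# The pod spec's automountServiceAccountToken overrides the
# ServiceAccount's setting; if neither is set, the token is mounted.

def effective_automount(sa_setting, pod_setting):
    """Return True if the SA token will be auto-mounted into the pod.

    sa_setting / pod_setting are True, False, or None (field unset).
    """
    if pod_setting is not None:   # pod spec wins when present
        return pod_setting
    if sa_setting is not None:    # otherwise the ServiceAccount decides
        return sa_setting
    return True                   # Kubernetes default: mount the token

# The secure pattern above: the SA disables mounting, pods opt in
print(effective_automount(False, None))   # False - no token mounted
print(effective_automount(False, True))   # True - explicit opt-in
```

This is why setting `automountServiceAccountToken: false` on the `default` ServiceAccount alone is not enough: any pod can still opt back in at the pod level, so admission policies (next step) are needed to enforce the rule.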
3. Implement Pod Security Standards and Admission Controllers

Use Kubernetes Pod Security Standards (PSS) and admission controllers like OPA Gatekeeper or Kyverno to enforce policies that prevent pods from using overprivileged default service accounts. Create policies that require explicit service account specification and validate RBAC permissions.

View implementation – YAML
# SECURE: Pod Security Standards enforcement
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
    # Require explicit service account specification
    security-policy: strict-rbac
---
# SECURE: OPA Gatekeeper constraint to require explicit service accounts
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequireserviceaccount
spec:
  crd:
    spec:
      names:
        kind: K8sRequireServiceAccount
      validation:
        type: object
        properties:
          allowedServiceAccounts:
            type: array
            items:
              type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequireserviceaccount

        import future.keywords.in

        # Pods carry the spec directly; workload kinds nest it under
        # spec.template.spec, and CronJobs one level deeper
        podspec(obj) := obj.spec.jobTemplate.spec.template.spec {
          obj.spec.jobTemplate
        } else := obj.spec.template.spec {
          obj.spec.template
        } else := obj.spec

        violation[{"msg": msg}] {
          not podspec(input.review.object).serviceAccountName
          msg := "Workload must specify an explicit serviceAccountName (default service account not allowed)"
        }

        violation[{"msg": msg}] {
          podspec(input.review.object).serviceAccountName == "default"
          msg := "Workloads cannot use the 'default' service account"
        }

        violation[{"msg": msg}] {
          count(input.parameters.allowedServiceAccounts) > 0
          sa := podspec(input.review.object).serviceAccountName
          not sa in input.parameters.allowedServiceAccounts
          msg := sprintf("Service account '%v' is not in allowed list: %v", 
            [sa, input.parameters.allowedServiceAccounts])
        }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequireServiceAccount
metadata:
  name: require-explicit-service-account
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
      - apiGroups: ["apps"]
        kinds: ["Deployment", "ReplicaSet", "DaemonSet", "StatefulSet"]
      - apiGroups: ["batch"]
        kinds: ["Job", "CronJob"]
    excludedNamespaces: ["kube-system", "kube-public"]
  parameters:
    allowedServiceAccounts:
    - "webapp-service-account"
    - "api-client-sa"
    - "monitoring-sa"
    - "backup-sa"
---
# SECURE: Kyverno policy for service account validation
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-service-account-rbac-validation
spec:
  validationFailureAction: enforce
  background: true
  rules:
  - name: require-explicit-service-account
    match:
      any:
      - resources:
          kinds:
          - Pod
          - Deployment
          - StatefulSet
          - DaemonSet
          - Job
          - CronJob
    validate:
      message: "Pod must specify explicit serviceAccountName (default not allowed)"
      deny:
        conditions:
          any:
          # Resolve the effective pod spec path for Pods, workload
          # kinds, and CronJobs in a single expression
          - key: "{{ request.object.spec.jobTemplate.spec.template.spec.serviceAccountName || request.object.spec.template.spec.serviceAccountName || request.object.spec.serviceAccountName || 'default' }}"
            operator: Equals
            value: "default"
  - name: validate-service-account-permissions
    match:
      any:
      - resources:
          kinds:
          - Pod
    preconditions:
      all:
      - key: "{{ request.object.spec.serviceAccountName }}"
        operator: NotEquals
        value: ""
    validate:
      message: "Service account must have minimal required permissions only"
      deny:
        conditions:
          any:
          # Block pods running as accounts named like cluster admins;
          # validating actual role bindings would require an API lookup
          - key: "{{ request.object.spec.serviceAccountName }}"
            operator: AnyIn
            value:
            - system:admin
            - cluster-admin
---
# SECURE: RBAC audit script
apiVersion: v1
kind: ConfigMap
metadata:
  name: rbac-audit-script
  namespace: security-tools
data:
  audit-rbac.sh: |
    #!/bin/bash
    set -euo pipefail
    
    echo "=== RBAC Security Audit ==="
    echo "Date: $(date)"
    echo
    
    # Check for pods using default service account
    echo "=== Pods using default service account ==="
    kubectl get pods --all-namespaces --no-headers -o custom-columns="NAMESPACE:.metadata.namespace,NAME:.metadata.name,SA:.spec.serviceAccountName" | \
      awk '$3 == "default" || $3 == "<none>"' | grep . || echo "No pods using default service account found"
    echo
    
    # Check for overprivileged service accounts
    echo "=== Service accounts with cluster-admin access ==="
    kubectl get clusterrolebindings -o json | \
      jq -r '.items[] | select(.roleRef.name=="cluster-admin") | 
      "ClusterRoleBinding: " + .metadata.name + 
      " | Subject: " + (.subjects[]? | .kind + "/" + .name + " in " + (.namespace // "cluster"))'
    echo
    
    # Check default service account permissions
    echo "=== Default service account permissions ==="
    for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
      echo "Namespace: $ns"
      kubectl get rolebindings,clusterrolebindings --all-namespaces -o json | \
        jq -r --arg ns "$ns" '.items[] | 
        select(.subjects[]? | .kind=="ServiceAccount" and .name=="default" and (.namespace // $ns)==$ns) | 
        "  Binding: " + .metadata.name + " | Role: " + .roleRef.name'
      echo
    done
    
    # Check for service accounts without explicit RBAC
    echo "=== Service accounts without explicit RBAC ==="
    kubectl get serviceaccounts --all-namespaces -o json | \
      jq -r '.items[] | 
      select(.metadata.name != "default") | 
      .metadata.namespace + "/" + .metadata.name' | \
    while read sa; do
      ns=$(echo $sa | cut -d'/' -f1)
      name=$(echo $sa | cut -d'/' -f2)
      
      # Check if SA has any role bindings
      bindings=$(kubectl get rolebindings,clusterrolebindings --all-namespaces -o json | \
        jq -r --arg ns "$ns" --arg name "$name" \
        '.items[] | select(.subjects[]? | .kind=="ServiceAccount" and .name==$name and (.namespace // $ns)==$ns) | .metadata.name')
      
      if [ -z "$bindings" ]; then
        echo "  No RBAC found for: $sa"
      fi
    done
    
    echo "=== Audit completed ==="
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: rbac-security-audit
  namespace: security-tools
spec:
  schedule: "0 2 * * 1"  # Weekly on Monday at 2 AM
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: rbac-auditor-sa
          restartPolicy: OnFailure
          containers:
          - name: auditor
            image: rbac-audit-tools:latest  # placeholder: the image must ship kubectl, bash, and jq
            command: ["/bin/bash"]
            args: ["/scripts/audit-rbac.sh"]
            volumeMounts:
            - name: audit-script
              mountPath: /scripts
          volumes:
          - name: audit-script
            configMap:
              name: rbac-audit-script
              defaultMode: 0755
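The Gatekeeper and Kyverno policies above can also be mirrored as a pre-admission check in CI, so a forbidden service account never reaches the cluster. A minimal sketch over manifests already parsed into dicts (e.g. with a YAML loader); the function name and messages are illustrative:

```python
# Sketch: the same checks the admission policies above enforce, run
# locally over parsed manifests before they are applied.

def service_account_violations(manifest, allowed=()):
    """Return a list of violation messages for one parsed manifest."""
    kind = manifest.get("kind", "")
    spec = manifest.get("spec", {})
    # Workload kinds nest the pod spec under spec.template.spec;
    # CronJobs nest it one level deeper
    if kind in ("Deployment", "StatefulSet", "DaemonSet", "ReplicaSet", "Job"):
        spec = spec.get("template", {}).get("spec", {})
    elif kind == "CronJob":
        spec = (spec.get("jobTemplate", {}).get("spec", {})
                    .get("template", {}).get("spec", {}))
    elif kind != "Pod":
        return []  # not a pod-bearing resource

    sa = spec.get("serviceAccountName")
    if not sa:
        return ["must specify an explicit serviceAccountName"]
    if sa == "default":
        return ["the 'default' service account is not allowed"]
    if allowed and sa not in allowed:
        return [f"service account '{sa}' is not in the allowed list"]
    return []

deploy = {"kind": "Deployment", "spec": {"template": {"spec": {}}}}
print(service_account_violations(deploy))
# ['must specify an explicit serviceAccountName']
```

Running this in CI catches violations at review time; the admission controllers remain the enforcement point of record, since anything can still be applied to the cluster out of band.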
4. Implement Service Account Lifecycle Management and Monitoring

Create automated processes for service account creation, permission review, and cleanup. Implement monitoring and alerting for service account usage patterns and permission escalations. Use tools like kubectl-who-can and rbac-lookup for regular audits.

View implementation – YAML
# SECURE: Service account lifecycle management
apiVersion: v1
kind: ConfigMap
metadata:
  name: sa-lifecycle-scripts
  namespace: platform-tools
data:
  create-service-account.sh: |
    #!/bin/bash
    # Service account creation with validation
    set -euo pipefail
    
    APP_NAME="$1"
    NAMESPACE="$2"
    PERMISSIONS_FILE="$3"
    
    # Validate inputs
    if [[ ! $APP_NAME =~ ^[a-z0-9-]+$ ]]; then
      echo "Error: APP_NAME must contain only lowercase letters, numbers, and hyphens"
      exit 1
    fi
    
    if [[ ! -f "$PERMISSIONS_FILE" ]]; then
      echo "Error: Permissions file not found: $PERMISSIONS_FILE"
      exit 1
    fi
    
    SA_NAME="${APP_NAME}-sa"
    
    echo "Creating service account: $SA_NAME in namespace: $NAMESPACE"
    
    # Create service account with metadata
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: $SA_NAME
      namespace: $NAMESPACE
      labels:
        app: $APP_NAME
        managed-by: platform-team
        security-review-required: "true"
      annotations:
        platform.company.com/created-by: "$(whoami)"
        platform.company.com/created-date: "$(date -Iseconds)"
        platform.company.com/permissions-file: "$PERMISSIONS_FILE"
        platform.company.com/review-date: "$(date -d '+90 days' -Iseconds)"
    automountServiceAccountToken: false
    EOF
    
    # Apply permissions from file
    echo "Applying permissions from: $PERMISSIONS_FILE"
    envsubst < "$PERMISSIONS_FILE" | kubectl apply -f -
    
    # Log creation for audit
    echo "Service account created successfully: $SA_NAME" | 
      logger -t "k8s-sa-lifecycle" -p local0.info
    
    echo "Created service account: $SA_NAME"
    echo "Remember to:"  
    echo "1. Update your deployment to use serviceAccountName: $SA_NAME"
    echo "2. Set automountServiceAccountToken: true if API access is needed"
    echo "3. Review permissions in 90 days"
  
  audit-service-accounts.sh: |
    #!/bin/bash
    # Comprehensive service account audit
    set -euo pipefail
    
    AUDIT_DATE=$(date -Iseconds)
    AUDIT_FILE="/tmp/sa-audit-${AUDIT_DATE}.json"
    
    echo "Starting service account audit: $AUDIT_DATE"
    
    # Collect service account data
    kubectl get serviceaccounts --all-namespaces -o json > "$AUDIT_FILE"
    
    # Analyze service accounts
    jq -c '.items[] | 
      select(.metadata.name != "default") | 
      {
        namespace: .metadata.namespace,
        name: .metadata.name,
        automount: .automountServiceAccountToken,
        age: .metadata.creationTimestamp,
        labels: .metadata.labels,
        annotations: .metadata.annotations
      }' "$AUDIT_FILE" | 
    while IFS= read -r sa_data; do
      namespace=$(echo "$sa_data" | jq -r '.namespace')
      name=$(echo "$sa_data" | jq -r '.name')
      
      echo "Analyzing: $namespace/$name"
      
      # Check for role bindings
      role_bindings=$(kubectl get rolebindings,clusterrolebindings --all-namespaces -o json | 
        jq -r --arg ns "$namespace" --arg name "$name" \
        '.items[] | 
        select(.subjects[]? | .kind=="ServiceAccount" and .name==$name and (.namespace // $ns)==$ns) | 
        {binding: .metadata.name, role: .roleRef.name, kind: .roleRef.kind}')
      
      # Check for pods using this SA
      pod_count=$(kubectl get pods --all-namespaces -o json | 
        jq -r --arg ns "$namespace" --arg name "$name" \
        '[.items[] | select(.spec.serviceAccountName == $name and .metadata.namespace == $ns)] | length')
      
      # Generate security score
      security_score=100
      warnings=0
      
      # Check automount setting
      automount=$(echo "$sa_data" | jq -r '.automount // true')
      if [[ "$automount" == "true" ]]; then
        if [[ "$pod_count" -eq 0 ]]; then
          echo "  WARNING: automountServiceAccountToken=true but no pods use this SA"
          security_score=$((security_score - 20))
          warnings=$((warnings + 1))
        fi
      fi
      
      # Check for cluster-admin access
      if echo "$role_bindings" | jq -r '.role' | grep -q "cluster-admin"; then
        echo "  CRITICAL: Service account has cluster-admin access"
        security_score=$((security_score - 50))
        warnings=$((warnings + 1))
      fi
      
      # Check age and review date
      created_date=$(echo "$sa_data" | jq -r '.age')
      review_date=$(echo "$sa_data" | jq -r '.annotations["platform.company.com/review-date"] // empty')
      
      if [[ -n "$review_date" && "$review_date" < "$AUDIT_DATE" ]]; then
        echo "  WARNING: Service account needs security review (due: $review_date)"
        security_score=$((security_score - 10))
        warnings=$((warnings + 1))
      fi
      
      # Output audit result
      echo "  Security Score: $security_score/100 (Warnings: $warnings)"
      echo "  Pods using SA: $pod_count"
      echo "  RBAC bindings: $(echo "$role_bindings" | jq -s 'length')"
      echo
    done
    
    echo "Audit completed: $AUDIT_DATE"
  
  cleanup-unused-sas.sh: |
    #!/bin/bash
    # Cleanup unused service accounts
    set -euo pipefail
    
    DRY_RUN=${DRY_RUN:-true}
    MAX_AGE_DAYS=${MAX_AGE_DAYS:-30}
    
    echo "Service Account Cleanup (DRY_RUN=$DRY_RUN, MAX_AGE_DAYS=$MAX_AGE_DAYS)"
    
    # Find unused service accounts
    kubectl get serviceaccounts --all-namespaces -o json | 
      jq -c '.items[] | 
      select(.metadata.name != "default") | 
      {namespace: .metadata.namespace, name: .metadata.name, created: .metadata.creationTimestamp}' | 
    while IFS= read -r sa_data; do
      namespace=$(echo "$sa_data" | jq -r '.namespace')
      name=$(echo "$sa_data" | jq -r '.name')
      created=$(echo "$sa_data" | jq -r '.created')
      
      # Check if any pods use this service account
      pod_count=$(kubectl get pods -n "$namespace" -o json | 
        jq -r --arg name "$name" \
        '[.items[] | select(.spec.serviceAccountName == $name)] | length')
      
      if [[ "$pod_count" -eq 0 ]]; then
        # Check age
        created_timestamp=$(date -d "$created" +%s)
        current_timestamp=$(date +%s)
        age_days=$(( (current_timestamp - created_timestamp) / 86400 ))
        
        if [[ "$age_days" -gt "$MAX_AGE_DAYS" ]]; then
          echo "Unused service account: $namespace/$name (age: $age_days days)"
          
          if [[ "$DRY_RUN" == "false" ]]; then
            # Remove role bindings first
            kubectl get rolebindings,clusterrolebindings --all-namespaces -o json | 
              jq -r --arg ns "$namespace" --arg name "$name" \
              '.items[] | 
              select(.subjects[]? | .kind=="ServiceAccount" and .name==$name and (.namespace // $ns)==$ns) | 
              (.metadata.namespace // "-") + " " + .metadata.name + " " + .kind' | 
            while read -r bind_ns bind_name bind_kind; do
              echo "  Removing $bind_kind: $bind_ns/$bind_name"
              if [[ "$bind_ns" == "-" ]]; then
                # ClusterRoleBindings are not namespaced
                kubectl delete "$bind_kind" "$bind_name" || true
              else
                kubectl delete "$bind_kind" "$bind_name" -n "$bind_ns" || true
              fi
            done
            
            # Remove service account
            echo "  Deleting service account: $namespace/$name"
            kubectl delete serviceaccount "$name" -n "$namespace"
          fi
        fi
      fi
    done
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-lifecycle-manager
  namespace: platform-tools
automountServiceAccountToken: true
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: sa-lifecycle-manager-role
rules:
- apiGroups: [""]
  resources: ["serviceaccounts"]
  verbs: ["get", "list", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]  # the scripts only inspect pods, never modify them
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"]
  verbs: ["get", "list", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: sa-lifecycle-manager-binding
subjects:
- kind: ServiceAccount
  name: sa-lifecycle-manager
  namespace: platform-tools
roleRef:
  kind: ClusterRole
  name: sa-lifecycle-manager-role
  apiGroup: rbac.authorization.k8s.io
---
# Monitoring and alerting for service account security
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: service-account-security
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: sa-security-exporter
  endpoints:
  - port: metrics
    interval: 30s
    path: /metrics
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: service-account-alerts
  namespace: monitoring
spec:
  groups:
  - name: service-account.rules
    rules:
    # NOTE: these expressions assume kube-state-metrics is configured to
    # expose pod service-account and RBAC binding metrics; adjust metric
    # names to match your exporter
    - alert: DefaultServiceAccountInUse
      expr: kube_pod_spec_service_account{service_account="default"} > 0
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: "Pod using default service account"
        description: "Pod {{ $labels.pod }} in namespace {{ $labels.namespace }} is using the default service account"
    
    - alert: OverprivilegedServiceAccount
      expr: |
        (
          sum by (service_account, namespace) (
            kube_rolebinding_info{role_kind="ClusterRole", role_name="cluster-admin"}
          ) > 0
        ) and on (service_account, namespace) (
          kube_pod_spec_service_account > 0
        )
      for: 10m
      labels:
        severity: critical
      annotations:
        summary: "Service account with cluster-admin privileges in use"
        description: "Service account {{ $labels.service_account }} in namespace {{ $labels.namespace }} has cluster-admin privileges and is being used by pods"
    
    - alert: UnusedServiceAccountWithPermissions
      expr: |
        (
          sum by (service_account, namespace) (kube_rolebinding_info) > 0
        ) unless on (service_account, namespace) (
          kube_pod_spec_service_account > 0
        )
      for: 1h
      labels:
        severity: info
      annotations:
        summary: "Unused service account with RBAC permissions"
        description: "Service account {{ $labels.service_account }} in namespace {{ $labels.namespace }} has RBAC permissions but is not used by any pods"
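The alerts and the audit CronJob answer the same two questions: which pods still run as `default`, and which service accounts hold permissions nobody uses. The core of that analysis, sketched over data already fetched with `kubectl get ... -o json` (inlined here as plain dicts; the function name is illustrative):

```python
# Sketch: the core cross-reference the audit scripts above perform,
# over already-fetched service account and pod data.

def audit_service_accounts(service_accounts, pods):
    """Yield (namespace, name, issue) tuples for risky or unused accounts."""
    # A pod with no serviceAccountName implicitly uses 'default'
    in_use = {(p["namespace"], p.get("serviceAccountName", "default"))
              for p in pods}
    for sa in service_accounts:
        key = (sa["namespace"], sa["name"])
        if sa["name"] == "default" and key in in_use:
            yield key + ("default service account in use",)
        elif sa["name"] != "default" and key not in in_use:
            yield key + ("unused service account",)

sas = [{"namespace": "production", "name": "default"},
       {"namespace": "production", "name": "backup-sa"}]
pods = [{"namespace": "production"}]  # no serviceAccountName -> 'default'
for finding in audit_service_accounts(sas, pods):
    print(finding)
```

Running both checks from the same data set keeps the audit consistent: every hit on the first condition is a candidate for an explicit service account, and every hit on the second is a candidate for RBAC cleanup.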
