
I'm running into a strange problem with K8s 1.3.2 on GCE. I have set up a 100 GB disk and a valid (and bound) PersistentVolume. My PersistentVolumeClaim, however, shows a capacity of 0 even though its status is Bound, and the pod that tries to use it is stuck in ContainerCreating.

Hopefully the kubectl outputs below summarize the problem:

$ gcloud compute disks list 
NAME                                 ZONE            SIZE_GB  TYPE         STATUS 
disk100-001                          europe-west1-d  100      pd-standard  READY 
gke-unrest-micro-pool-199acc6c-3p31  europe-west1-d  100      pd-standard  READY 
gke-unrest-micro-pool-199acc6c-4q55  europe-west1-d  100      pd-standard  READY 

$ kubectl get pv 
NAME            CAPACITY  ACCESSMODES  STATUS  CLAIM                           REASON  AGE 
pv-disk100-001  100Gi     RWO          Bound   default/graphite-statsd-claim           2m 

$ kubectl get pvc 
NAME                   STATUS  VOLUME          CAPACITY  ACCESSMODES  AGE 
graphite-statsd-claim  Bound   pv-disk100-001  0                      3m 

$ kubectl describe pvc 
Name:  graphite-statsd-claim 
Namespace: default 
Status:  Bound 
Volume:  pv-disk100-001 
Labels:  <none> 
Capacity: 0 
Access Modes: 
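To see exactly what the API server stored for the claim, the raw object can be dumped as YAML and its status.capacity compared against the PV (a diagnostic sketch; this output was not part of the original post):

$ # Dump the claim as stored; for a healthy bound claim, status.capacity 
$ # and status.accessModes should mirror the bound PV (100Gi, RWO). 
$ kubectl get pvc graphite-statsd-claim -o yaml 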

$ kubectl describe pv 
Name:  pv-disk100-001 
Labels:  <none> 
Status:  Bound 
Claim:  default/graphite-statsd-claim 
Reclaim Policy: Recycle 
Access Modes: RWO 
Capacity: 100Gi 
Message: 
Source: 
    Type: GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine) 
    PDName: disk100-001 
    FSType: ext4 
    Partition: 0 
    ReadOnly: false 

# Events for pod that is supposed to mount this volume: 
Events: 
    FirstSeen LastSeen Count From      SubobjectPath Type  Reason  Message 
    --------- -------- ----- ----      ------------- -------- ------  ------- 
    6h  1m  183 {kubelet gke-unrest-micro-pool-199acc6c-4q55}   Warning  FailedMount Unable to mount volumes for pod "graphite-statsd-1873928417-i05ef_default(bf9fa0e5-4d8e-11e6-881c-42010af001fe)": timeout expired waiting for volumes to attach/mount for pod "graphite-statsd-1873928417-i05ef"/"default". list of unattached/unmounted volumes=[graphite-data] 
    6h  1m  183 {kubelet gke-unrest-micro-pool-199acc6c-4q55}   Warning  FailedSync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "graphite-statsd-1873928417-i05ef"/"default". list of unattached/unmounted volumes=[graphite-data] 
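
The events show the attach/mount step timing out on node gke-unrest-micro-pool-199acc6c-4q55. Whether the GCE disk was ever attached to that node can be checked from the GCE side (a diagnostic sketch; the node name and zone are taken from the outputs above):

$ # List the device names of the disks currently attached to the node; 
$ # disk100-001 should appear here once the attach succeeds. 
$ gcloud compute instances describe gke-unrest-micro-pool-199acc6c-4q55 \ 
    --zone europe-west1-d --format 'value(disks[].deviceName)' 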

# Extract from deploy yaml file: 
--- 
apiVersion: v1 
kind: PersistentVolume 
metadata: 
    name: pv-disk100-001 
spec: 
    capacity: 
        storage: 100Gi 
    accessModes: 
    - ReadWriteOnce 
    persistentVolumeReclaimPolicy: Recycle 
    gcePersistentDisk: 
        pdName: disk100-001 
        fsType: ext4 
--- 
apiVersion: v1 
kind: PersistentVolumeClaim 
metadata: 
    name: graphite-statsd-claim 
spec: 
    accessModes: 
    - ReadWriteOnce 
    resources: 
        requests: 
            storage: 100Gi 
--- 
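
For reference, the pod spec in the deployment presumably consumes the claim along these lines (a sketch, not the actual manifest: the claim name and the volume name graphite-data are taken from the outputs above, while the image and mount path are placeholders):

--- 
apiVersion: v1 
kind: Pod 
metadata: 
    name: graphite-statsd 
spec: 
    containers: 
    - name: graphite-statsd 
      image: graphiteapp/graphite-statsd    # placeholder image 
      volumeMounts: 
      - name: graphite-data                 # the volume named in the FailedMount events 
        mountPath: /opt/graphite/storage    # placeholder mount path 
    volumes: 
    - name: graphite-data 
      persistentVolumeClaim: 
          claimName: graphite-statsd-claim  # binds to the PVC defined above 
--- 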

Any help gratefully received!

Answer