[ovs-build] |fail| pw1516459 [ovs-dev] [PATCH ovn v2] northd: Fix routing loop in LRs with one-to-many SNAT
0-day Robot
robot at bytheb.org
Mon Aug 16 18:58:33 UTC 2021
From: robot at bytheb.org
Test-Label: github-robot: ovn-kubernetes
Test-Status: fail
http://patchwork.ozlabs.org/api/patches/1516459/
_github build: failed_
Build URL: https://github.com/ovsrobot/ovn/actions/runs/1136201512
Build Logs:
-----------------------Summary of failed steps-----------------------
"e2e (shard-conformance, false, false, false, ipv4, IPv4, true, false)" failed at step Run Tests
"e2e (shard-conformance, false, false, false, ipv4, IPv4, true, false)" failed at step Generate Test Report
"e2e (shard-conformance, false, false, false, ipv6, IPv6, false, true)" failed at step Run Tests
"e2e (shard-conformance, false, false, false, ipv6, IPv6, false, true)" failed at step Generate Test Report
"e2e (shard-conformance, false, false, false, dualstack, Dualstack, true, true)" failed at step Run Tests
"e2e (shard-conformance, false, false, false, dualstack, Dualstack, true, true)" failed at step Generate Test Report
"e2e (control-plane, true, true, true, ipv4, IPv4, true, false)" failed at step Run Tests
"e2e (control-plane, true, true, true, ipv4, IPv4, true, false)" failed at step Generate Test Report
----------------------End summary of failed steps--------------------
-------------------------------BEGIN LOGS----------------------------
####################################################################################
#### [Begin job log] "e2e (shard-conformance, false, false, false, ipv4, IPv4, true, false)" at step Run Tests
####################################################################################
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:205
[Fail] [sig-network] Services [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:205
[Fail] [sig-network] Services [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:205
[Fail] [sig-network] Services [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:205
[Fail] [sig-network] Services [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:205
[Fail] [sig-network] Services [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:205
Ran 207 of 5668 Specs in 1667.371 seconds
FAIL! -- 201 Passed | 6 Failed | 3 Flaked | 0 Pending | 5461 Skipped
Ginkgo ran 1 suite in 28m11.086847068s
Test Suite Failed
make: Leaving directory '/home/runner/work/ovn/ovn/src/github.com/ovn-org/ovn-kubernetes/test'
##[error]Process completed with exit code 2.
####################################################################################
#### [End job log] "e2e (shard-conformance, false, false, false, ipv4, IPv4, true, false)" at step Run Tests
####################################################################################
####################################################################################
#### [Begin job log] "e2e (shard-conformance, false, false, false, ipv4, IPv4, true, false)" at step Generate Test Report
####################################################################################
Aug 16 17:34:24.367: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.0.218:9080/dial?request=hostName&protocol=http&host=10.96.10.27&port=80&tries=1'] Namespace:nettest-1731 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug 16 17:34:24.367: INFO: >>> kubeConfig: /home/runner/admin.conf
Aug 16 17:34:24.949: INFO: Tries: 10, in try: 0, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-1731, hostIp: 172.18.0.3, podIp: 10.244.0.218, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-16 17:34:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-08-16 17:34:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-08-16 17:34:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-16 17:34:17 +0000 UTC }]" }
Aug 16 17:34:26.953: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.0.218:9080/dial?request=hostName&protocol=http&host=10.96.10.27&port=80&tries=1'] Namespace:nettest-1731 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug 16 17:34:26.954: INFO: >>> kubeConfig: /home/runner/admin.conf
Aug 16 17:34:27.038: INFO: Tries: 10, in try: 1, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-1731, hostIp: 172.18.0.3, podIp: 10.244.0.218, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-16 17:34:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-08-16 17:34:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-08-16 17:34:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-16 17:34:17 +0000 UTC }]" }
Aug 16 17:34:29.044: INF
::set-output name=report-file::src/github.com/ovn-org/ovn-kubernetes/test/_artifacts//index.html
Inspecting 'src/github.com/ovn-org/ovn-kubernetes/test/_artifacts/'
<testsuite name="Kubernetes e2e suite" tests="9" failures="0" errors="0" time="1667.165">
<testsuite name="Kubernetes e2e suite" tests="5" failures="0" errors="0" time="972.226">
<testsuite name="Kubernetes e2e suite" tests="8" failures="0" errors="0" time="864.992">
<testsuite name="Kubernetes e2e suite" tests="10" failures="0" errors="0" time="1063.295">
<testsuite name="Kubernetes e2e suite" tests="15" failures="0" errors="0" time="1026.506">
<testsuite name="Kubernetes e2e suite" tests="2" failures="1" errors="0" time="1472.019">
Failure found in src/github.com/ovn-org/ovn-kubernetes/test/_artifacts//junit_06.xml: <testsuite name="Kubernetes e2e suite" tests="2" failures="1" errors="0" time="1472.019">
####################################################################################
#### [End job log] "e2e (shard-conformance, false, false, false, ipv4, IPv4, true, false)" at step Generate Test Report
####################################################################################
####################################################################################
#### [Begin job log] "e2e (shard-conformance, false, false, false, ipv6, IPv6, false, true)" at step Run Tests
####################################################################################
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:205
[Fail] [sig-network] Services [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:205
[Fail] [sig-network] Services [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:205
[Fail] [sig-network] Services [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:205
[Fail] [sig-network] Services [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:205
[Fail] [sig-network] Services [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:205
Ran 204 of 5668 Specs in 1423.681 seconds
FAIL! -- 198 Passed | 6 Failed | 3 Flaked | 0 Pending | 5464 Skipped
Ginkgo ran 1 suite in 24m5.22745252s
Test Suite Failed
make: Leaving directory '/home/runner/work/ovn/ovn/src/github.com/ovn-org/ovn-kubernetes/test'
##[error]Process completed with exit code 2.
####################################################################################
#### [End job log] "e2e (shard-conformance, false, false, false, ipv6, IPv6, false, true)" at step Run Tests
####################################################################################
####################################################################################
#### [Begin job log] "e2e (shard-conformance, false, false, false, ipv6, IPv6, false, true)" at step Generate Test Report
####################################################################################
Aug 16 17:31:26.589: INFO: Waiting for amount of service:node-port-service endpoints to be 3
STEP: Waiting for Session Affinity service to expose endpoint
Aug 16 17:31:27.621: INFO: Waiting for amount of service:session-affinity-service endpoints to be 3
STEP: dialing(http) test-container-pod --> fd00:10:96::d51c:80
Aug 16 17:31:27.674: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://[fd00:10:244:2::dd]:9080/dial?request=hostName&protocol=http&host=fd00:10:96::d51c&port=80&tries=1'] Namespace:nettest-3565 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug 16 17:31:27.674: INFO: >>> kubeConfig: /home/runner/admin.conf
Aug 16 17:31:28.592: INFO: Tries: 10, in try: 0, stdout: {"errors":["Get \"http://[fd00:10:96::d51c]:80/hostName\": dial tcp [fd00:10:96::d51c]:80: connect: connection refused"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-3565, hostIp: fc00:f853:ccd:e793::3, podIp: fd00:10:244:2::dd, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-16 17:31:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-08-16 17:31:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-08-16 17:31:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-16 17:31:10 +0000 UTC }]" }
Aug 16 17:31:30.600: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://[fd00:10:244:2::dd]:9080/dial?request=hostName&protocol=http&host=fd00:10:96::d51c&port=80&tries=1'] Namespace:nettest-3565 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Aug 16 17:31:30.600: INFO: >>> kubeConfig: /home/runner/admin.conf
Aug 16 17:31:31.744: INFO: Tries: 10, in try: 1, stdout: {"responses":["netserver-2"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-3565, hostIp: fc00:f853:ccd:e793::3, podIp: fd00:10:244:2::dd, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-16 17:31:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 20
::set-output name=report-file::src/github.com/ovn-org/ovn-kubernetes/test/_artifacts//index.html
Inspecting 'src/github.com/ovn-org/ovn-kubernetes/test/_artifacts/'
<testsuite name="Kubernetes e2e suite" tests="12" failures="0" errors="0" time="1423.394">
<testsuite name="Kubernetes e2e suite" tests="19" failures="0" errors="0" time="1031.409">
<testsuite name="Kubernetes e2e suite" tests="19" failures="0" errors="0" time="886.266">
<testsuite name="Kubernetes e2e suite" tests="5" failures="0" errors="0" time="875.456">
<testsuite name="Kubernetes e2e suite" tests="1" failures="1" errors="0" time="1423.511">
Failure found in src/github.com/ovn-org/ovn-kubernetes/test/_artifacts//junit_05.xml: <testsuite name="Kubernetes e2e suite" tests="1" failures="1" errors="0" time="1423.511">
####################################################################################
#### [End job log] "e2e (shard-conformance, false, false, false, ipv6, IPv6, false, true)" at step Generate Test Report
####################################################################################
####################################################################################
#### [Begin job log] "e2e (shard-conformance, false, false, false, dualstack, Dualstack, true, true)" at step Run Tests
####################################################################################
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
[Fail] [sig-network] [Feature:IPv6DualStackAlphaFeature] [LinuxOnly] Granular Checks: Services Secondary IP Family [It] should function for client IP based session affinity: udp [LinuxOnly]
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
[Fail] [sig-network] Services [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:205
[Fail] [sig-network] [Feature:IPv6DualStackAlphaFeature] [LinuxOnly] Granular Checks: Services Secondary IP Family [It] should function for client IP based session affinity: udp [LinuxOnly]
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
[Fail] [sig-network] Services [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:205
[Fail] [sig-network] Services [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:205
Ran 226 of 5668 Specs in 2255.296 seconds
FAIL! -- 217 Passed | 9 Failed | 7 Flaked | 0 Pending | 5442 Skipped
Ginkgo ran 1 suite in 38m2.812134853s
Test Suite Failed
make: Leaving directory '/home/runner/work/ovn/ovn/src/github.com/ovn-org/ovn-kubernetes/test'
##[error]Process completed with exit code 2.
####################################################################################
#### [End job log] "e2e (shard-conformance, false, false, false, dualstack, Dualstack, true, true)" at step Run Tests
####################################################################################
####################################################################################
#### [Begin job log] "e2e (shard-conformance, false, false, false, dualstack, Dualstack, true, true)" at step Generate Test Report
####################################################################################
Aug 16 17:30:22.327: INFO: The status of Pod server-7w5nx is Running (Ready = false)
Aug 16 17:30:24.036: INFO: The status of Pod server-7w5nx is Running (Ready = true)
STEP: Testing pods can connect to both ports when no policy is present.
STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server.
Aug 16 17:30:24.222: INFO: Waiting for client-can-connect-80-9sjmn to complete.
Aug 16 17:30:38.373: INFO: Waiting for client-can-connect-80-9sjmn to complete.
Aug 16 17:30:38.373: INFO: Waiting up to 5m0s for pod "client-can-connect-80-9sjmn" in namespace "network-policy-1103" to be "Succeeded or Failed"
Aug 16 17:30:38.532: INFO: Pod "client-can-connect-80-9sjmn": Phase="Failed", Reason="", readiness=false. Elapsed: 159.003353ms
Aug 16 17:30:38.595: INFO: Running '/usr/local/bin/kubectl --server=https://10.1.0.93:11337 --kubeconfig=/home/runner/admin.conf --namespace=network-policy-1103 describe po client-can-connect-80-9sjmn'
Aug 16 17:30:39.168: INFO: stderr: ""
Aug 16 17:30:39.168: INFO: stdout: "Name: client-can-connect-80-9sjmn\nNamespace: network-policy-1103\nPriority: 0\nNode: ovn-worker2/172.18.0.2\nStart Time: Mon, 16 Aug 2021 17:30:24 +0000\nLabels: pod-name=client-can-connect-80\nAnnotations: k8s.ovn.org/pod-networks:\n {\"default\":{\"ip_addresses\":[\"10.244.0.124/24\",\"fd00:10:244:1::7c/64\"],\"mac_address\":\"0a:58:0a:f4:00:7c\",\"gateway_ips\":[\"10.244.0.1\",\"fd00:...\nStatus: Failed\nIP: 10.244.0.124\nIPs:\n IP: 10.244.0.124\n IP: fd00:10:244:1::7c\nContainers:\n client:\n Container ID: containerd://7a5a12c60e1b9cd841cc287a44ab9975abc51542db344d2b51413bc0eafbb4d4\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.21\n Image ID: k8s.gcr.io/e2e-test-images/agnhost at sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a\n Port: <none>\n Host Port: <none>\n Command:\n /bin/sh\n Args:\n -c\n for i in $(seq 1 5); do /agnhost connect 10.96.132.178:80 --protocol tcp --timeout 8s && exit 0 || sleep 1; done; exit 1\n State: Terminated\n Reason: Error\n Exit Code: 1\n Started: Mon, 16 Aug 2021 17:30:31 +0000\n Finished: Mon, 16 Aug 2021 17:30:37 +0000\n Ready: False\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-wn6gz (ro)\nConditions:\n Type Status\n Initialized True \n Ready False \n ContainersReady False \n PodScheduled True \nVolumes:\n default-token-wn6gz:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-wn6gz\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 15s default-scheduler Successfully assigned network-policy-1103/client-can-connect-80-9sjmn to ovn-worker2\n Normal Pulled 9s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.21\" already present 
on machine\n Normal Created 9s kubelet Created container client\n Normal Started 8s kubelet Started container client\n"
Aug 16 17:30:39.168: INFO:
Output of kubectl describe client-can-connect-80-9sjmn:
Name: client-can-connect-80-9sjmn
Namespace: network-policy-1103
Priority: 0
Node: ovn-worker2/172.18.0.2
Start Time: Mon, 16 Aug 2021 17:30:24 +0000
Labels: pod-name=client-can-connect-80
Annotations: k8s.ovn.org/pod-networks:
{"default":{"ip_addresses":["10.244.0.124/24","fd00:10:244:1::7c/64"],"mac_address":"0a:58:0a:f4:00:7c","gateway_ips":["10.244.0.1","fd00:...
Status: Faile
::set-output name=report-file::src/github.com/ovn-org/ovn-kubernetes/test/_artifacts//index.html
Inspecting 'src/github.com/ovn-org/ovn-kubernetes/test/_artifacts/'
<testsuite name="Kubernetes e2e suite" tests="12" failures="1" errors="0" time="2254.004">
Failure found in src/github.com/ovn-org/ovn-kubernetes/test/_artifacts//junit_01.xml: <testsuite name="Kubernetes e2e suite" tests="12" failures="1" errors="0" time="2254.004">
####################################################################################
#### [End job log] "e2e (shard-conformance, false, false, false, dualstack, Dualstack, true, true)" at step Generate Test Report
####################################################################################
####################################################################################
#### [Begin job log] "e2e (control-plane, true, true, true, ipv4, IPv4, true, false)" at step Run Tests
####################################################################################
[Fail] host to host-networked pods traffic validation Validating Host to Host Netwoked pods traffic [It] Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local
/home/runner/work/ovn/ovn/src/github.com/ovn-org/ovn-kubernetes/test/e2e/e2e.go:2278
[Fail] e2e ingress to host-networked pods traffic validation Validating ingress traffic to Host Netwoked pods [It] Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local
/home/runner/work/ovn/ovn/src/github.com/ovn-org/ovn-kubernetes/test/e2e/e2e.go:2193
[Fail] e2e ingress to host-networked pods traffic validation Validating ingress traffic to Host Netwoked pods [It] Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local
/home/runner/work/ovn/ovn/src/github.com/ovn-org/ovn-kubernetes/test/e2e/e2e.go:2154
[Fail] e2e ingress traffic validation Validating ingress traffic [It] Should be allowed to node local cluster-networked endpoints by nodeport services with externalTrafficPolicy=local
/home/runner/work/ovn/ovn/src/github.com/ovn-org/ovn-kubernetes/test/e2e/e2e.go:1884
[Fail] e2e ingress traffic validation Validating ingress traffic [It] Should be allowed to node local cluster-networked endpoints by nodeport services with externalTrafficPolicy=local
/home/runner/work/ovn/ovn/src/github.com/ovn-org/ovn-kubernetes/test/e2e/e2e.go:1884
Ran 47 of 60 Specs in 4931.728 seconds
FAIL! -- 44 Passed | 3 Failed | 0 Flaked | 0 Pending | 13 Skipped
--- FAIL: TestE2e (4931.73s)
FAIL
FAIL github.com/ovn-org/ovn-kubernetes/test/e2e 4931.806s
FAIL
make: *** [Makefile:29: control-plane] Error 1
make: Leaving directory '/home/runner/work/ovn/ovn/src/github.com/ovn-org/ovn-kubernetes/test'
##[error]Process completed with exit code 2.
####################################################################################
#### [End job log] "e2e (control-plane, true, true, true, ipv4, IPv4, true, false)" at step Run Tests
####################################################################################
####################################################################################
#### [Begin job log] "e2e (control-plane, true, true, true, ipv4, IPv4, true, false)" at step Generate Test Report
####################################################################################
Aug 16 18:13:31.484: INFO: Container ovn-controller ready: true, restart count 0
Aug 16 18:13:31.484: INFO: Container ovnkube-node ready: true, restart count 1
Aug 16 18:13:31.484: INFO: Container ovs-metrics-exporter ready: true, restart count 0
Aug 16 18:13:31.484: INFO: kube-controller-manager-ovn-control-plane started at 2021-08-16 17:09:58 +0000 UTC (0+1 container statuses recorded)
Aug 16 18:13:31.484: INFO: Container kube-controller-manager ready: true, restart count 0
Aug 16 18:13:31.484: INFO: kube-apiserver-ovn-control-plane started at 2021-08-16 17:09:58 +0000 UTC (0+1 container statuses recorded)
Aug 16 18:13:31.484: INFO: Container kube-apiserver ready: true, restart count 0
Aug 16 18:13:31.484: INFO: ovnkube-db-2 started at 2021-08-16 17:18:38 +0000 UTC (0+3 container statuses recorded)
Aug 16 18:13:31.484: INFO: Container nb-ovsdb ready: true, restart count 0
Aug 16 18:13:31.484: INFO: Container ovn-dbchecker ready: true, restart count 0
Aug 16 18:13:31.484: INFO: Container sb-ovsdb ready: true, restart count 0
Aug 16 18:13:31.484: INFO: ovnkube-master-676d55b958-nxg2s started at 2021-08-16 17:11:44 +0000 UTC (0+3 container statuses recorded)
Aug 16 18:13:31.484: INFO: Container nbctl-daemon ready: true, restart count 0
Aug 16 18:13:31.484: INFO: Container ovn-northd ready: true, restart count 0
Aug 16 18:13:31.484: INFO: Container ovnkube-master ready: true, restart count 0
Aug 16 18:13:31.484: INFO: ovn-control-plane-hostnet-ep started at 2021-08-16 18:12:23 +0000 UTC (0+1 container statuses recorded)
Aug 16 18:13:31.484: INFO: Container ovn-control-plane-hostnet-ep-container ready: false, restart count 0
Aug 16 18:13:31.751: INFO:
Latency metrics for node ovn-control-plane
Aug 16 18:13:31.751: INFO:
Logging node info for node ovn-worker
Aug 16 18:13:31.756: INFO: Node Info: &Node{ObjectMeta:{ovn-worker 2003c73f-5cf9-4b4d-8e93-a7d319ea265e 10532 0 2021-08-16 17:10:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux k8s.ovn.org/ovnkube-db:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ovn-worker kubernetes.io/os:linux node-role.kubernetes.io/master:] map[k8s.ovn.org/host-addresses:["172.18.0.4","172.18.1.1","fc00:f853:ccd:e793::4"] k8s.ovn.org/hybrid-overlay-distributed-router-gateway-mac:0a:58:0a:f4:00:03 k8s.ovn.org/l3-gateway-config:{"default":{"mode":"shared","interface-id":"breth0_ovn-worker","mac-address":"02:42:ac:12:00:04","ip-addresses":["172.18.0.4/16"],"ip-address":"172.18.0.4/16","next-hops":["172.18.0.1"],"next-hop":"172.18.0.1","node-port-enable":"true","vlan-id":"0"}} k8s.ovn.org/node-chassis-id:fa5d8d9a-6ead-4425-97aa-d14c259e664d k8s.ovn.org/node-mgmt-port-mac-address:d2:2b:5e:9c:16:f7 k8s.ovn.org/node-primary-ifaddr:{"ipv4":"172.18.0.4/16"} k8s.ovn.org/node-subnets:{"default":"10.244.0.0/24"} k8s.ovn.org/topology-version:4 kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-08-16 17:10:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kubelet Update v1 2021-08-16 17:10:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubectl-label Update v1 2021-08-16 17:11:43 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:k8s.ovn.org/ovnkube-db":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-08-16 18:10:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} 
{ovn-worker Update v1 2021-08-16 18:10:2
::set-output name=report-file::src/github.com/ovn-org/ovn-kubernetes/test/_artifacts//index.html
Inspecting 'src/github.com/ovn-org/ovn-kubernetes/test/_artifacts/'
<testsuite name="E2e Suite" tests="47" failures="3" errors="0" time="4931.727">
Failure found in src/github.com/ovn-org/ovn-kubernetes/test/_artifacts//junit_control-plane-IPv4_01.xml: <testsuite name="E2e Suite" tests="47" failures="3" errors="0" time="4931.727">
####################################################################################
#### [End job log] "e2e (control-plane, true, true, true, ipv4, IPv4, true, false)" at step Generate Test Report
####################################################################################
--------------------------------END LOGS-----------------------------
More information about the build
mailing list