[ovs-dev] Building OVS on Ubuntu 12.04 stuck in atomic operation unit test

Jarno Rajahalme jrajahalme at nicira.com
Wed Jan 7 19:13:54 UTC 2015


On Jan 7, 2015, at 3:48 AM, Finucane, Stephen <stephen.finucane at intel.com> wrote:

>> We've heard similar reports before but it's challenging to find the
>> problem.  I can't reproduce the problem on my 32-bit Debian system by
>> just, for example, switching to GCC 4.6.
> 
> From [here]( http://openvswitch.org/pipermail/dev/2014-December/049833.html), you'll see I have a number of boards failing and one passing. I'll take config files from one of each (note - the kernels of the failing boards have been updated)
> 
>> What's in config.h and config.log?
> 
> The 'config.h' files are identical for both machines. The 'config.log' files differ slightly, but this seems to be due to SSL support being available on one board but not the other. I've attached both versions regardless (as attachments, due to the size of these files - hope this is ok?).
> 
>> How many cores does the system running the build have?
> 
> WORKING: 8 cores w/ 8 threads, dual socket (Intel(R) Xeon(R) CPU E5-2660 0).
> 
> BROKEN: 10 cores w/ 10 threads, dual socket (Intel(R) Xeon(R) CPU E5-2680 v2).
> 

We previously saw an apparent hang in this same test case when the build was run in a VM with only one core assigned to it. Given that, it would be interesting to know whether the test/build systems run under a hypervisor and, if so, how many cores are assigned to the VMs running the builds.

From this thread it seems that the working and broken systems are running the same software and that the build target is x86_64 - or is the hang happening only with a 32-bit target?

Also, sorry if I’m a bit out of sync here, but do you have other configurations using GCC 4.6 that work on the same hardware where this Ubuntu target is failing?

Regardless, applying this patch may help narrow down the problem:

-------------- next part --------------


If this patch resolves the problem, I would be interested in getting a disassembled copy of tests/test-atomic.o from a failing build (built without the patch above).

Regards,

  Jarno

> Here's the lstopo output (from hwloc package) for both, in case it's useful:
> 
> WORKING:
> 
> $ lstopo-no-graphics
> No protocol specified
> Machine (31GB)
>  NUMANode L#0 (P#0 16GB)
>    Socket L#0 + L3 L#0 (20MB)
>      L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0 + PU L#0 (P#0)
>      L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1 + PU L#1 (P#1)
>      L2 L#2 (256KB) + L1d L#2 (32KB) + L1i L#2 (32KB) + Core L#2 + PU L#2 (P#2)
>      L2 L#3 (256KB) + L1d L#3 (32KB) + L1i L#3 (32KB) + Core L#3 + PU L#3 (P#3)
>      L2 L#4 (256KB) + L1d L#4 (32KB) + L1i L#4 (32KB) + Core L#4 + PU L#4 (P#4)
>      L2 L#5 (256KB) + L1d L#5 (32KB) + L1i L#5 (32KB) + Core L#5 + PU L#5 (P#5)
>      L2 L#6 (256KB) + L1d L#6 (32KB) + L1i L#6 (32KB) + Core L#6 + PU L#6 (P#6)
>      L2 L#7 (256KB) + L1d L#7 (32KB) + L1i L#7 (32KB) + Core L#7 + PU L#7 (P#7)
>    HostBridge L#0
>      ...
>  NUMANode L#1 (P#1 16GB)
>    Socket L#1 + L3 L#1 (20MB)
>      L2 L#8 (256KB) + L1d L#8 (32KB) + L1i L#8 (32KB) + Core L#8 + PU L#8 (P#8)
>      L2 L#9 (256KB) + L1d L#9 (32KB) + L1i L#9 (32KB) + Core L#9 + PU L#9 (P#9)
>      L2 L#10 (256KB) + L1d L#10 (32KB) + L1i L#10 (32KB) + Core L#10 + PU L#10 (P#10)
>      L2 L#11 (256KB) + L1d L#11 (32KB) + L1i L#11 (32KB) + Core L#11 + PU L#11 (P#11)
>      L2 L#12 (256KB) + L1d L#12 (32KB) + L1i L#12 (32KB) + Core L#12 + PU L#12 (P#12)
>      L2 L#13 (256KB) + L1d L#13 (32KB) + L1i L#13 (32KB) + Core L#13 + PU L#13 (P#13)
>      L2 L#14 (256KB) + L1d L#14 (32KB) + L1i L#14 (32KB) + Core L#14 + PU L#14 (P#14)
>      L2 L#15 (256KB) + L1d L#15 (32KB) + L1i L#15 (32KB) + Core L#15 + PU L#15 (P#15)
>    HostBridge L#7
>      ...
> 
> BROKEN:
> 
> $ lstopo-no-graphics
> Machine (63GB)
>  NUMANode L#0 (P#0 31GB)
>    Socket L#0 + L3 L#0 (25MB)
>      L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0 + PU L#0 (P#0)
>      L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1 + PU L#1 (P#1)
>      L2 L#2 (256KB) + L1d L#2 (32KB) + L1i L#2 (32KB) + Core L#2 + PU L#2 (P#2)
>      L2 L#3 (256KB) + L1d L#3 (32KB) + L1i L#3 (32KB) + Core L#3 + PU L#3 (P#3)
>      L2 L#4 (256KB) + L1d L#4 (32KB) + L1i L#4 (32KB) + Core L#4 + PU L#4 (P#4)
>      L2 L#5 (256KB) + L1d L#5 (32KB) + L1i L#5 (32KB) + Core L#5 + PU L#5 (P#5)
>      L2 L#6 (256KB) + L1d L#6 (32KB) + L1i L#6 (32KB) + Core L#6 + PU L#6 (P#6)
>      L2 L#7 (256KB) + L1d L#7 (32KB) + L1i L#7 (32KB) + Core L#7 + PU L#7 (P#7)
>      L2 L#8 (256KB) + L1d L#8 (32KB) + L1i L#8 (32KB) + Core L#8 + PU L#8 (P#8)
>      L2 L#9 (256KB) + L1d L#9 (32KB) + L1i L#9 (32KB) + Core L#9 + PU L#9 (P#9)
>    HostBridge L#0
>      ...
>  NUMANode L#1 (P#1 31GB)
>    Socket L#1 + L3 L#1 (25MB)
>      L2 L#10 (256KB) + L1d L#10 (32KB) + L1i L#10 (32KB) + Core L#10 + PU L#10 (P#10)
>      L2 L#11 (256KB) + L1d L#11 (32KB) + L1i L#11 (32KB) + Core L#11 + PU L#11 (P#11)
>      L2 L#12 (256KB) + L1d L#12 (32KB) + L1i L#12 (32KB) + Core L#12 + PU L#12 (P#12)
>      L2 L#13 (256KB) + L1d L#13 (32KB) + L1i L#13 (32KB) + Core L#13 + PU L#13 (P#13)
>      L2 L#14 (256KB) + L1d L#14 (32KB) + L1i L#14 (32KB) + Core L#14 + PU L#14 (P#14)
>      L2 L#15 (256KB) + L1d L#15 (32KB) + L1i L#15 (32KB) + Core L#15 + PU L#15 (P#15)
>      L2 L#16 (256KB) + L1d L#16 (32KB) + L1i L#16 (32KB) + Core L#16 + PU L#16 (P#16)
>      L2 L#17 (256KB) + L1d L#17 (32KB) + L1i L#17 (32KB) + Core L#17 + PU L#17 (P#17)
>      L2 L#18 (256KB) + L1d L#18 (32KB) + L1i L#18 (32KB) + Core L#18 + PU L#18 (P#18)
>      L2 L#19 (256KB) + L1d L#19 (32KB) + L1i L#19 (32KB) + Core L#19 + PU L#19 (P#19)
>    HostBridge L#7
>      ...
> 
> Some other failing boards, for comparison:
> 
> spec: 4 cores w/ 8 threads, single socket (Intel(R) Xeon(R) CPU E3-1285 v3)
> 
> $ lstopo-no-graphics
> Machine (31GB)
>  Socket L#0 + L3 L#0 (8192KB)
>    L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0
>      PU L#0 (P#0)
>      PU L#1 (P#4)
>    L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1
>      PU L#2 (P#1)
>      PU L#3 (P#5)
>    L2 L#2 (256KB) + L1d L#2 (32KB) + L1i L#2 (32KB) + Core L#2
>      PU L#4 (P#2)
>      PU L#5 (P#6)
>    L2 L#3 (256KB) + L1d L#3 (32KB) + L1i L#3 (32KB) + Core L#3
>      PU L#6 (P#3)
>      PU L#7 (P#7)
>  HostBridge L#0
>    ...
> _______________________________________________
> dev mailing list
> dev at openvswitch.org
> http://openvswitch.org/mailman/listinfo/dev


