[Xenomai] i.MX6q memory write causes high latency

Philippe Gerum rpm at xenomai.org
Fri Jul 6 11:10:29 CEST 2018


On 07/06/2018 10:41 AM, Federico Sbalchiero wrote:
> On 07/05/2018 11:14, Philippe Gerum wrote:
>> On 07/04/2018 07:06 PM, Federico Sbalchiero wrote:
>>> Hi,
>>> first I want to say thanks to everyone involved in Xenomai for their
>>> job.
>>>
>>> I'm testing Xenomai 3.0.7 and ipipe-arm/4.14 on a Freescale/NXP i.MX6q
>>> sabresd board using Yocto. The system boots fine and is stable, but
>>> latency under load (xeno-test) is higher than on my reference system
>>> (Xenomai 2.6.5 on Freescale kernel 3.10.17 + ipipe 3.10.18).
>>> This is after disabling power management, frequency scaling, CMA,
>>> graphics, tracing, and debugging.
>>>
>>> I have found that a simple non-realtime user-space process writing a
>>> buffer in memory (memwrite) is enough to trigger such high latencies.
>>> Latency worsens a lot when running a copy of the process on each core.
>>> There is a correlation between buffer size and cache size suggesting
>>> an L2 cache issue, like the L2 write allocate issue discussed on the
>>> mailing list, but I can confirm L2 WA is disabled (see log).
>>>
>>> I'm looking for comments or suggestions.
>>>
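For anyone wanting to reproduce this, a stressor along the following
lines should approximate the memwrite test described above (a minimal,
untested sketch, not Federico's actual code; the 4 MB buffer size is an
assumption, anything comfortably larger than the 1 MB PL310 L2 cache of
the i.MX6q should do):

/*
 * memwrite.c - minimal non-RT memory stressor. Keeps rewriting a
 * buffer larger than the L2 cache so the write traffic constantly
 * spills out of L1/L2 into main memory.
 *
 * Build: gcc -O2 -o memwrite memwrite.c
 * Run one instance per core, e.g.: taskset -c 0 ./memwrite &
 */
#include <stdlib.h>
#include <string.h>

#define BUFSIZE (4 * 1024 * 1024) /* assumption: > 1 MB L2 on i.MX6q */

int main(void)
{
	char *buf = malloc(BUFSIZE);

	if (!buf)
		return 1;

	for (;;) {
		memset(buf, 0x5a, BUFSIZE); /* sequential write sweep */
		/* keep the compiler from eliding the dead stores */
		asm volatile("" : : "r"(buf) : "memory");
	}
}

Running one instance per core (taskset -c <cpu> ./memwrite &) matches
the four-instance load used for the figures below.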
>> A basic dd if=/dev/zero of=/dev/null loop in the background is actually
>> enough to raise the latency. Could you try the Xenomai 3 + 3.18 combo on
>> your hw and let us know whether you see the same regression?
>>
>> TIA,
>>
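(For the record, that means something like dd if=/dev/zero of=/dev/null
bs=1M & started once per core; without a count= limit dd runs until
killed, so no extra shell loop is needed. The bs=1M block size is just
a guess to maximize memory traffic.)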
> 
> kernel 3.18.20-ipipe + xenomai 3.0.7
> 
> latency under load (four memwrite instances)
> RTT|  00:00:01  (periodic user-mode task, 1000 us period, priority 99)
> RTH|----lat min|----lat avg|----lat max|-overrun|---msw|---lat best|--lat worst
> RTD|     24.985|     41.374|     76.351|       0| 0|     24.985|     76.351
> RTD|     26.889|     41.203|     68.070|       0| 0|     24.985|     76.351
> RTD|     22.828|     41.376|     67.681|       0| 0|     22.828|     76.351
> RTD|     20.969|     41.043|     74.143|       0| 0|     20.969|     76.351
> RTD|     27.027|     41.441|     68.037|       0| 0|     20.969|     76.351
> RTD|     24.413|     41.585|     81.062|       0| 0|     20.969|     81.062
> RTD|     27.234|     41.168|     76.516|       0| 0|     20.969|     81.062
> RTD|     23.779|     41.141|     70.466|       0| 0|     20.969|     81.062
> RTD|     24.824|     41.273|     75.322|       0| 0|     20.969|     81.062
> RTD|     25.627|     41.195|     71.157|       0| 0|     20.969|     81.062
> RTD|     28.874|     41.089|     66.579|       0| 0|     20.969|     81.062
> RTD|     26.672|     41.638|     75.995|       0| 0|     20.969|     81.062
> RTD|     25.139|     41.040|     69.543|       0| 0|     20.969|     81.062
> RTD|     26.215|     41.099|     66.336|       0| 0|     20.969|     81.062
> RTD|     24.192|     41.117|     76.828|       0| 0|     20.969|     81.062
> RTD|     27.310|     41.942|     79.888|       0| 0|     20.969|     81.062
> RTD|     24.348|     40.955|     66.484|       0| 0|     20.969|     81.062
> RTD|     26.679|     41.260|     80.242|       0| 0|     20.969|     81.062
> RTD|     26.820|     41.251|     74.986|       0| 0|     20.969|     81.062
> RTD|     27.635|     41.301|     73.961|       0| 0|     20.969|     81.062
> RTD|     26.877|     41.305|     72.789|       0| 0|     20.969|     81.062
> 
> 

Ok, if all goes well, we should soon see the worst-case latency drop to
~65 us under high mm stress on i.MX6q over 4.14; the fewer the cores,
the better the results with the i.MX6 series.

-- 
Philippe.


