[Xenomai] i.MX6q memory write causes high latency

Philippe Gerum rpm at xenomai.org
Fri Jul 6 11:14:48 CEST 2018


On 07/06/2018 11:12 AM, Philippe Gerum wrote:
> On 07/06/2018 11:09 AM, Federico Sbalchiero wrote:
>> On 07/05/2018 11:14 AM, Philippe Gerum wrote:
>>> On 07/04/2018 07:06 PM, Federico Sbalchiero wrote:
>>>> Hi,
>>>> first I want to thank everyone involved in Xenomai for their
>>>> work.
>>>>
>>>> I'm testing Xenomai 3.0.7 and ipipe-arm/4.14 on a Freescale/NXP
>>>> i.MX6q sabresd board using Yocto. The system boots fine and is
>>>> stable, but latency under load (xeno-test) is higher than on my
>>>> reference system (Xenomai 2.6.5 on Freescale kernel 3.10.17 +
>>>> ipipe 3.10.18). This is with power management, frequency scaling,
>>>> CMA, graphics, tracing and debugging disabled.
>>>>
>>>> I have found that a simple non-realtime user-space process writing
>>>> a buffer in memory (memwrite) can trigger such high latencies.
>>>> Latency worsens a lot when running a copy of the process on each
>>>> core. There is a correlation between buffer size and cache size,
>>>> suggesting an L2 cache issue like the L2 write-allocate behavior
>>>> discussed on the mailing list, but I can confirm L2 WA is disabled
>>>> (see log).
>>>>
>>>> I'm looking for comments or suggestions.
>>>>
>>> A basic dd if=/dev/zero of=/dev/null loop in the background is
>>> actually enough to raise the latency. Could you try the Xenomai 3 +
>>> 3.18 combo on your hardware and let us know whether you see the
>>> same regression?
>>>
>>> TIA,
>>>
>>
>> In the same configuration (kernel 3.18.20-ipipe + Xenomai 3.0.7),
>> dd if=/dev/zero of=/dev/null has almost no effect on latency. I
>> think all the data writes hit a few small buffers, so the L2 cache
>> is not stressed.
>>
> 
> You need to set a large block size for dd.
> 
> 

bs=16M suffices to raise the worst-case latency quite significantly on
i.MX6QP (sabresd) here.
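
For anyone trying to reproduce this, the full background load is then
the loop from the earlier message combined with the larger block size,

  dd if=/dev/zero of=/dev/null bs=16M

run as a plain non-realtime task while xeno-test is sampling.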
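
Federico's memwrite reproducer is not attached to this thread, but
from his description it boils down to a non-realtime user-space
process rewriting a large buffer in a loop. A minimal sketch along
those lines (the program name, default buffer size and use of memset
are assumptions, not the original tool):

/*
 * memwrite-like load generator: endlessly rewrites a buffer from
 * non-realtime user space. A buffer larger than the 1 MiB L2 of the
 * i.MX6q forces a sustained write stream to external memory.
 * Build: gcc -O2 -o memwrite memwrite.c
 */
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
	/* buffer size in bytes, 4 MiB by default */
	size_t size = argc > 1 ? strtoul(argv[1], NULL, 0) : 4UL << 20;
	char *buf = malloc(size);

	if (!buf)
		return 1;

	for (;;)
		memset(buf, 0x55, size); /* continuous write traffic */

	return 0;
}

Running one instance per core, as in the original report, maximizes
the pressure on the shared L2.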

-- 
Philippe.
