Xenomai  3.0.8
Big dual kernel lock


#define cobalt_atomic_enter(__context)
 Enter atomic section (dual kernel only)
#define cobalt_atomic_leave(__context)
 Leave atomic section (dual kernel only)
#define RTDM_EXECUTE_ATOMICALLY(code_block)
 Execute code block atomically (DEPRECATED)

Detailed Description

Macro Definition Documentation

◆ cobalt_atomic_enter

#define cobalt_atomic_enter(__context)            \
do {                                              \
    xnlock_get_irqsave(&nklock, (__context));     \
    xnsched_lock();                               \
} while (0)

Enter atomic section (dual kernel only)

This call opens a fully atomic section, serializing execution with respect to all interrupt handlers (including for real-time IRQs) and Xenomai threads running on all CPUs.

Parameters
    __context	name of a local variable to store the context in. This variable, updated by the real-time core, holds the information required to leave the atomic section properly.

Atomic sections may be nested. Within an atomic section delimited by cobalt_atomic_enter()/cobalt_atomic_leave() calls, the caller may sleep on a blocking Xenomai service from primary mode. Conversely, sleeping on a regular Linux kernel service while holding this lock is NOT valid.
Since the strongest lock is acquired by this service, it can be used to synchronize real-time and non-real-time contexts.
This service is not portable to the Mercury core, and should be restricted to Cobalt-specific use cases, mainly for the purpose of porting existing dual-kernel drivers which still depend on the obsolete RTDM_EXECUTE_ATOMICALLY() construct.

◆ cobalt_atomic_leave

#define cobalt_atomic_leave(__context)             \
do {                                               \
    xnsched_unlock();                              \
    xnlock_put_irqrestore(&nklock, (__context));   \
} while (0)

Leave atomic section (dual kernel only)

This call closes an atomic section previously opened by a call to cobalt_atomic_enter(), restoring the preemption and interrupt state which prevailed prior to entering the exited section.

Parameters
    __context	name of the local variable which stored the context.
This service is not portable to the Mercury core, and should be restricted to Cobalt-specific use cases.


◆ RTDM_EXECUTE_ATOMICALLY

#define RTDM_EXECUTE_ATOMICALLY(code_block)  \
{                                            \
    <ENTER_ATOMIC_SECTION>                   \
    code_block;                              \
    <LEAVE_ATOMIC_SECTION>                   \
}
Execute code block atomically (DEPRECATED)

Generally, it is illegal to suspend the current task by calling rtdm_task_sleep(), rtdm_event_wait(), etc. while holding a spinlock. In contrast, this macro allows combining several operations, including a potentially rescheduling call, into a code block that executes atomically with respect to other RTDM_EXECUTE_ATOMICALLY() blocks. The macro is a lightweight alternative to protecting code blocks via mutexes, and it can even be used to synchronize real-time and non-real-time contexts.

Parameters
    code_block	commands to be executed atomically

It is not allowed to leave the code block explicitly via break, return, goto, etc. Doing so would leave the global lock held after the code block, in an inconsistent state. Moreover, do not embed complex operations in the code block; consider that they will be executed under the preemption lock with interrupts switched off. Also note that invoking rescheduling calls may break the atomicity until the task gains the CPU again.
Tags
    unrestricted
This construct will be phased out in Xenomai 3.0. Please use rtdm_waitqueue services instead.