2025-03-30

RTEMS supports multiple scheduling algorithms; by default it uses a priority-based scheduler. To get familiar with these algorithms, this article lists the scheduler types RTEMS supports, as groundwork for testing and studying each scheduler later.

1. Simple Priority Scheduler

The Simple Priority Scheduler is a simplified priority-based scheduling algorithm. Its configuration is as follows:

#define CONFIGURE_SCHEDULER_TABLE_ENTRIES \
  RTEMS_SCHEDULER_TABLE_SIMPLE( dflt, CONFIGURE_SCHEDULER_NAME )

#define SCHEDULER_SIMPLE_ENTRY_POINTS \
  { \
    _Scheduler_simple_Initialize,       /* initialize entry point */ \
    _Scheduler_simple_Schedule,         /* schedule entry point */ \
    _Scheduler_simple_Yield,            /* yield entry point */ \
    _Scheduler_simple_Block,            /* block entry point */ \
    _Scheduler_simple_Unblock,          /* unblock entry point */ \
    _Scheduler_simple_Update_priority,  /* update priority entry point */ \
    _Scheduler_default_Map_priority,    /* map priority entry point */ \
    _Scheduler_default_Unmap_priority,  /* unmap priority entry point */ \
    SCHEDULER_DEFAULT_SMP_OPERATIONS \
    _Scheduler_default_Node_initialize, /* node initialize entry point */ \
    _Scheduler_default_Node_destroy,    /* node destroy entry point */ \
    _Scheduler_default_Release_job,     /* new period of task */ \
    _Scheduler_default_Cancel_job,      /* cancel period of task */ \
    _Scheduler_default_Start_idle       /* start idle entry point */ \
    SCHEDULER_DEFAULT_SET_AFFINITY_OPERATION \
  }

2. Priority Scheduler

The Priority Scheduler is the default priority-based scheduler. Its configuration is as follows:

#define RTEMS_SCHEDULER_TABLE_PRIORITY( name, obj_name ) \
  { \
    &SCHEDULER_PRIORITY_CONTEXT_NAME( name ).Base.Base, \
    SCHEDULER_PRIORITY_ENTRY_POINTS, \
    RTEMS_ARRAY_SIZE( \
      SCHEDULER_PRIORITY_CONTEXT_NAME( name ).Ready \
    ) - 1, \
    ( obj_name ) \
    SCHEDULER_CONTROL_IS_NON_PREEMPT_MODE_SUPPORTED( true ) \
  }

#define SCHEDULER_PRIORITY_ENTRY_POINTS \
  { \
    _Scheduler_priority_Initialize,       /* initialize entry point */ \
    _Scheduler_priority_Schedule,         /* schedule entry point */ \
    _Scheduler_priority_Yield,            /* yield entry point */ \
    _Scheduler_priority_Block,            /* block entry point */ \
    _Scheduler_priority_Unblock,          /* unblock entry point */ \
    _Scheduler_priority_Update_priority,  /* update priority entry point */ \
    _Scheduler_default_Map_priority,      /* map priority entry point */ \
    _Scheduler_default_Unmap_priority,    /* unmap priority entry point */ \
    SCHEDULER_DEFAULT_SMP_OPERATIONS \
    _Scheduler_priority_Node_initialize,  /* node initialize entry point */ \
    _Scheduler_default_Node_destroy,      /* node destroy entry point */ \
    _Scheduler_default_Release_job,       /* new period of task */ \
    _Scheduler_default_Cancel_job,        /* cancel period of task */ \
    _Scheduler_default_Start_idle         /* start idle entry point */ \
    SCHEDULER_DEFAULT_SET_AFFINITY_OPERATION \
  }

3. Earliest Deadline First Scheduler

The Earliest Deadline First (EDF) scheduler derives task priorities from task deadlines. Its configuration is as follows:

#define RTEMS_SCHEDULER_TABLE_EDF( name, obj_name ) \
  { \
    &SCHEDULER_EDF_CONTEXT_NAME( name ).Base, \
    SCHEDULER_EDF_ENTRY_POINTS, \
    SCHEDULER_EDF_MAXIMUM_PRIORITY, \
    ( obj_name ) \
    SCHEDULER_CONTROL_IS_NON_PREEMPT_MODE_SUPPORTED( true ) \
  }

#define SCHEDULER_EDF_ENTRY_POINTS \
  { \
    _Scheduler_EDF_Initialize,       /* initialize entry point */ \
    _Scheduler_EDF_Schedule,         /* schedule entry point */ \
    _Scheduler_EDF_Yield,            /* yield entry point */ \
    _Scheduler_EDF_Block,            /* block entry point */ \
    _Scheduler_EDF_Unblock,          /* unblock entry point */ \
    _Scheduler_EDF_Update_priority,  /* update priority entry point */ \
    _Scheduler_EDF_Map_priority,     /* map priority entry point */ \
    _Scheduler_EDF_Unmap_priority,   /* unmap priority entry point */ \
    SCHEDULER_DEFAULT_SMP_OPERATIONS \
    _Scheduler_EDF_Node_initialize,  /* node initialize entry point */ \
    _Scheduler_default_Node_destroy, /* node destroy entry point */ \
    _Scheduler_EDF_Release_job,      /* new period of task */ \
    _Scheduler_EDF_Cancel_job,       /* cancel period of task */ \
    _Scheduler_default_Start_idle    /* start idle entry point */ \
    SCHEDULER_DEFAULT_SET_AFFINITY_OPERATION \
  }

4. Constant Bandwidth Server Scheduler

The Constant Bandwidth Server (CBS) scheduler is an EDF-based extension: each task is assigned a fixed bandwidth (budget), and tasks are then scheduled by deadline. Its configuration is as follows:

#define RTEMS_SCHEDULER_TABLE_CBS( name, obj_name ) \
  { \
    &SCHEDULER_CBS_CONTEXT_NAME( name ).Base, \
    SCHEDULER_CBS_ENTRY_POINTS, \
    SCHEDULER_CBS_MAXIMUM_PRIORITY, \
    ( obj_name ) \
    SCHEDULER_CONTROL_IS_NON_PREEMPT_MODE_SUPPORTED( true ) \
  }

#define SCHEDULER_CBS_ENTRY_POINTS \
  { \
    _Scheduler_EDF_Initialize,       /* initialize entry point */ \
    _Scheduler_EDF_Schedule,         /* schedule entry point */ \
    _Scheduler_EDF_Yield,            /* yield entry point */ \
    _Scheduler_EDF_Block,            /* block entry point */ \
    _Scheduler_CBS_Unblock,          /* unblock entry point */ \
    _Scheduler_EDF_Update_priority,  /* update priority entry point */ \
    _Scheduler_EDF_Map_priority,     /* map priority entry point */ \
    _Scheduler_EDF_Unmap_priority,   /* unmap priority entry point */ \
    SCHEDULER_DEFAULT_SMP_OPERATIONS \
    _Scheduler_CBS_Node_initialize,  /* node initialize entry point */ \
    _Scheduler_default_Node_destroy, /* node destroy entry point */ \
    _Scheduler_CBS_Release_job,      /* new period of task */ \
    _Scheduler_CBS_Cancel_job,       /* cancel period of task */ \
    _Scheduler_default_Start_idle    /* start idle entry point */ \
    SCHEDULER_DEFAULT_SET_AFFINITY_OPERATION \
  }

5. SMP Extensions of the Schedulers

To support SMP, the Simple Priority, Priority, and EDF schedulers each have an SMP-extended variant, as shown below:

#define RTEMS_SCHEDULER_TABLE_SIMPLE_SMP( name, obj_name ) \
  { \
    &SCHEDULER_SIMPLE_SMP_CONTEXT_NAME( name ).Base.Base, \
    SCHEDULER_SIMPLE_SMP_ENTRY_POINTS, \
    SCHEDULER_SIMPLE_SMP_MAXIMUM_PRIORITY, \
    ( obj_name ) \
    SCHEDULER_CONTROL_IS_NON_PREEMPT_MODE_SUPPORTED( false ) \
  }

#define RTEMS_SCHEDULER_TABLE_PRIORITY_SMP( name, obj_name ) \
  { \
    &SCHEDULER_PRIORITY_SMP_CONTEXT_NAME( name ).Base.Base.Base, \
    SCHEDULER_PRIORITY_SMP_ENTRY_POINTS, \
    RTEMS_ARRAY_SIZE( \
      SCHEDULER_PRIORITY_SMP_CONTEXT_NAME( name ).Ready \
    ) - 1, \
    ( obj_name ) \
    SCHEDULER_CONTROL_IS_NON_PREEMPT_MODE_SUPPORTED( false ) \
  }

#define RTEMS_SCHEDULER_TABLE_EDF_SMP( name, obj_name ) \
  { \
    &SCHEDULER_EDF_SMP_CONTEXT_NAME( name ).Base.Base.Base, \
    SCHEDULER_EDF_SMP_ENTRY_POINTS, \
    SCHEDULER_EDF_MAXIMUM_PRIORITY, \
    ( obj_name ) \
    SCHEDULER_CONTROL_IS_NON_PREEMPT_MODE_SUPPORTED( false ) \
  }

6. Other Schedulers

Besides the schedulers above, there is also a priority SMP scheduler with CPU-affinity support, RTEMS_SCHEDULER_TABLE_PRIORITY_AFFINITY_SMP. It adjusts the priority-based scheduler according to CPU affinity, so a task can be restricted to run only on a particular CPU.

There is also a preemption-based SMP scheduler (the Strong APA scheduler, CONFIGURE_SCHEDULER_STRONG_APA) that likewise supports CPU-affinity settings.

7. The System Default Scheduler

If no scheduler is configured explicitly, the system picks a default: EDF SMP on SMP builds configured with more than one processor, otherwise the priority scheduler. The selection is defined as follows:

#if !defined(CONFIGURE_SCHEDULER_CBS) \
  && !defined(CONFIGURE_SCHEDULER_EDF) \
  && !defined(CONFIGURE_SCHEDULER_EDF_SMP) \
  && !defined(CONFIGURE_SCHEDULER_PRIORITY) \
  && !defined(CONFIGURE_SCHEDULER_PRIORITY_AFFINITY_SMP) \
  && !defined(CONFIGURE_SCHEDULER_PRIORITY_SMP) \
  && !defined(CONFIGURE_SCHEDULER_SIMPLE) \
  && !defined(CONFIGURE_SCHEDULER_SIMPLE_SMP) \
  && !defined(CONFIGURE_SCHEDULER_STRONG_APA) \
  && !defined(CONFIGURE_SCHEDULER_USER)
  #if defined(RTEMS_SMP) && _CONFIGURE_MAXIMUM_PROCESSORS > 1
    #define CONFIGURE_SCHEDULER_EDF_SMP
  #else
    #define CONFIGURE_SCHEDULER_PRIORITY
  #endif
#endif
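The preprocessor selection above can be restated as a plain function for illustration. This is an invented sketch, not RTEMS code: EDF SMP is only chosen for SMP builds configured with more than one processor.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical mirror of the default-scheduler selection: the two inputs
 * stand for RTEMS_SMP and _CONFIGURE_MAXIMUM_PROCESSORS. */
static const char *default_scheduler(int rtems_smp, int max_processors)
{
    return (rtems_smp && max_processors > 1) ? "EDF SMP" : "Priority";
}
```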

The configuration takes effect as follows:

#ifdef CONFIGURE_SCHEDULER
  /*
   * Ignore these warnings:
   *
   * - invalid use of structure with flexible array member
   *
   * - struct has no members
   */
  #pragma GCC diagnostic push
  #pragma GCC diagnostic ignored "-Wpedantic"
  CONFIGURE_SCHEDULER;
  #pragma GCC diagnostic pop
#endif

const Scheduler_Control _Scheduler_Table[] = {
  CONFIGURE_SCHEDULER_TABLE_ENTRIES
};

Expanding the macros gives the following:

static struct {
  Scheduler_priority_Context Base;
  Chain_Control Ready[ ( 255 + 1 ) ];
} _Configuration_Scheduler_priority_dflt;

const Scheduler_Control _Scheduler_Table[] = {
  {
    &_Configuration_Scheduler_priority_dflt.Base.Base,
    {
      _Scheduler_priority_Initialize,
      _Scheduler_priority_Schedule,
      _Scheduler_priority_Yield,
      _Scheduler_priority_Block,
      _Scheduler_priority_Unblock,
      _Scheduler_priority_Update_priority,
      _Scheduler_default_Map_priority,
      _Scheduler_default_Unmap_priority,

As expected, the configured default scheduler is the priority scheduler, and there is exactly one scheduler instance. Recall from the earlier walk through the initialization code that rtems_initialize_data_structures calls _Scheduler_Handler_initialization, implemented as follows:

void _Scheduler_Handler_initialization(void)
{
  size_t n;
  size_t i;

  n = _Scheduler_Count;

  for ( i = 0 ; i < n ; ++i ) {
    const Scheduler_Control *scheduler;
#if defined(RTEMS_SMP)
    Scheduler_Context *context;
#endif

    scheduler = &_Scheduler_Table[ i ];
#if defined(RTEMS_SMP)
    context = _Scheduler_Get_context( scheduler );
    _ISR_lock_Initialize( &context->Lock, "Scheduler" );
#endif

    ( *scheduler->Operations.initialize )( scheduler );
  }
}

This directly invokes each scheduler's initialize entry point, which kicks off scheduler initialization.
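The dispatch pattern above can be sketched in miniature. This is a simplified, hypothetical mirror of _Scheduler_Handler_initialization, not RTEMS code; Sched, demo_initialize, and handler_initialization are invented names. It walks a static table of control structs and calls each one's initialize hook through a function pointer, exactly as the real loop does with _Scheduler_Table.

```c
#include <assert.h>
#include <stddef.h>

typedef struct Sched Sched;
struct Sched {
    const char *name;
    void (*initialize)(Sched *);  /* the "initialize entry point" */
    int initialized;              /* set by the hook, for demonstration */
};

static void demo_initialize(Sched *s)
{
    s->initialized = 1;
}

/* Stand-in for _Scheduler_Table with one "dflt" scheduler configured. */
static Sched sched_table[] = {
    { "dflt", demo_initialize, 0 },
};

static void handler_initialization(void)
{
    size_t n = sizeof(sched_table) / sizeof(sched_table[0]);
    for (size_t i = 0; i < n; ++i)
        (*sched_table[i].initialize)(&sched_table[i]);
}
```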


As an operating system, RTEMS supports task scheduling. From a program's point of view we need a clear picture of the scheduling scenarios, so this article walks through the RTEMS scheduler structure and describes the scheduling scenarios RTEMS supports.

1. The RTEMS Scheduler Structure

The structure is defined as follows:

typedef struct {
  /** @see _Scheduler_Handler_initialization() */
  void ( *initialize )( const Scheduler_Control * );

  /** @see _Scheduler_Schedule() */
  void ( *schedule )( const Scheduler_Control *, Thread_Control * );

  /** @see _Scheduler_Yield() */
  void ( *yield )( const Scheduler_Control *, Thread_Control *, Scheduler_Node * );

  /** @see _Scheduler_Block() */
  void ( *block )( const Scheduler_Control *, Thread_Control *, Scheduler_Node * );

  /** @see _Scheduler_Unblock() */
  void ( *unblock )( const Scheduler_Control *, Thread_Control *, Scheduler_Node * );

  /** @see _Scheduler_Update_priority() */
  void ( *update_priority )( const Scheduler_Control *, Thread_Control *, Scheduler_Node * );

  /** @see _Scheduler_Map_priority() */
  Priority_Control ( *map_priority )( const Scheduler_Control *, Priority_Control );

  /** @see _Scheduler_Unmap_priority() */
  Priority_Control ( *unmap_priority )( const Scheduler_Control *, Priority_Control );

#if defined(RTEMS_SMP)
  /**
   * @brief Ask for help operation.
   *
   * @param[in] scheduler The scheduler instance to ask for help.
   * @param[in] the_thread The thread needing help.
   * @param[in] node The scheduler node.
   *
   * @retval true Ask for help was successful.
   * @retval false Otherwise.
   */
  bool ( *ask_for_help )(
    const Scheduler_Control *scheduler,
    Thread_Control          *the_thread,
    Scheduler_Node          *node
  );

  /**
   * @brief Reconsider help operation.
   *
   * @param[in] scheduler The scheduler instance to reconsider the help
   *   request.
   * @param[in] the_thread The thread reconsidering a help request.
   * @param[in] node The scheduler node.
   */
  void ( *reconsider_help_request )(
    const Scheduler_Control *scheduler,
    Thread_Control          *the_thread,
    Scheduler_Node          *node
  );

  /**
   * @brief Withdraw node operation.
   *
   * @param[in] scheduler The scheduler instance to withdraw the node.
   * @param[in] the_thread The thread using the node.
   * @param[in] node The scheduler node to withdraw.
   * @param[in] next_state The next thread scheduler state in case the node is
   *   scheduled.
   */
  void ( *withdraw_node )(
    const Scheduler_Control *scheduler,
    Thread_Control          *the_thread,
    Scheduler_Node          *node,
    Thread_Scheduler_state   next_state
  );

  /**
   * @brief Makes the node sticky.
   *
   * This operation is used by _Thread_Priority_update_and_make_sticky().  It
   * is only called for the scheduler node of the home scheduler.
   *
   * Uniprocessor schedulers should provide
   * _Scheduler_default_Sticky_do_nothing() for this operation.
   *
   * SMP schedulers should provide this operation using
   * _Scheduler_SMP_Make_sticky().
   *
   * The make and clean sticky operations are an optimization to simplify the
   * control flow in the update priority operation.  The update priority
   * operation is used for all scheduler nodes and not just the scheduler node
   * of home schedulers.  The update priority operation is a commonly used
   * operation together with block and unblock.  The make and clean sticky
   * operations are used only in specific scenarios.
   *
   * @param scheduler is the scheduler of the node.
   *
   * @param[in, out] the_thread is the thread owning the node.
   *
   * @param[in, out] node is the scheduler node to make sticky.
   */
  void ( *make_sticky )(
    const Scheduler_Control *scheduler,
    Thread_Control          *the_thread,
    Scheduler_Node          *node
  );

  /**
   * @brief Cleans the sticky property from the node.
   *
   * This operation is used by _Thread_Priority_update_and_clean_sticky().  It
   * is only called for the scheduler node of the home scheduler.
   *
   * Uniprocessor schedulers should provide
   * _Scheduler_default_Sticky_do_nothing() for this operation.
   *
   * SMP schedulers should provide this operation using
   * _Scheduler_SMP_Clean_sticky().
   *
   * @param scheduler is the scheduler of the node.
   *
   * @param[in, out] the_thread is the thread owning the node.
   *
   * @param[in, out] node is the scheduler node to clean the sticky property.
   */
  void ( *clean_sticky )(
    const Scheduler_Control *scheduler,
    Thread_Control          *the_thread,
    Scheduler_Node          *node
  );

  /**
   * @brief Pin thread operation.
   *
   * @param[in] scheduler The scheduler instance of the specified processor.
   * @param[in] the_thread The thread to pin.
   * @param[in] node The scheduler node of the thread.
   * @param[in] cpu The processor to pin the thread.
   */
  void ( *pin )(
    const Scheduler_Control *scheduler,
    Thread_Control          *the_thread,
    Scheduler_Node          *node,
    struct Per_CPU_Control  *cpu
  );

  /**
   * @brief Unpin thread operation.
   *
   * @param[in] scheduler The scheduler instance of the specified processor.
   * @param[in] the_thread The thread to unpin.
   * @param[in] node The scheduler node of the thread.
   * @param[in] cpu The processor to unpin the thread.
   */
  void ( *unpin )(
    const Scheduler_Control *scheduler,
    Thread_Control          *the_thread,
    Scheduler_Node          *node,
    struct Per_CPU_Control  *cpu
  );

  /**
   * @brief Add processor operation.
   *
   * @param[in] scheduler The scheduler instance to add the processor.
   * @param[in] idle The idle thread of the processor to add.
   */
  void ( *add_processor )(
    const Scheduler_Control *scheduler,
    Thread_Control          *idle
  );

  /**
   * @brief Remove processor operation.
   *
   * @param[in] scheduler The scheduler instance to remove the processor.
   * @param[in] cpu The processor to remove.
   *
   * @return The idle thread of the removed processor.
   */
  Thread_Control *( *remove_processor )(
    const Scheduler_Control *scheduler,
    struct Per_CPU_Control  *cpu
  );
#endif

  /** @see _Scheduler_Node_initialize() */
  void ( *node_initialize )( const Scheduler_Control *, Scheduler_Node *, Thread_Control *, Priority_Control );

  /** @see _Scheduler_Node_destroy() */
  void ( *node_destroy )( const Scheduler_Control *, Scheduler_Node * );

  /** @see _Scheduler_Release_job() */
  void ( *release_job ) ( const Scheduler_Control *, Thread_Control *, Priority_Node *, uint64_t, Thread_queue_Context * );

  /** @see _Scheduler_Cancel_job() */
  void ( *cancel_job ) ( const Scheduler_Control *, Thread_Control *, Priority_Node *, Thread_queue_Context * );

  /** @see _Scheduler_Start_idle() */
  void ( *start_idle )( const Scheduler_Control *, Thread_Control *, struct Per_CPU_Control * );

#if defined(RTEMS_SMP)
  /** @see _Scheduler_Set_affinity() */
  Status_Control ( *set_affinity )( const Scheduler_Control *, Thread_Control *, Scheduler_Node *, const Processor_mask * );
#endif
} Scheduler_Operations;

Each member of this structure is a callback a scheduler must implement; the callbacks are invoked by RTEMS applications and by the system itself. The sections below describe the main scheduling scenarios.

2. Scheduler Initialization

_Scheduler_Handler_initialization is the scheduler initialization function. In the RTEMS initialization flow, rtems_initialize_data_structures calls it; it walks _Scheduler_Table and initializes each configured scheduler in turn.

3. Performing a Schedule

_Scheduler_Schedule marks that the current thread may be scheduled away: it updates the current thread's heir and sets the dispatch-necessary flag, then waits for the system to perform the dispatch.

4. Voluntary Yield

_Scheduler_Yield is the yield operation: a user task calls it to voluntarily give up the processor and let the scheduler pick the next task.

5. Blocking a Task

_Scheduler_Block blocks the current task: it removes the task from the ready queue, updates the heir, and waits for the dispatch.

6. Unblocking a Task

_Scheduler_Unblock is the counterpart of _Scheduler_Block: it puts the task back on the ready queue, updates the heir, and waits for the dispatch.

7. Updating Task Priority

_Scheduler_Update_priority walks the scheduler's tasks and updates their priorities.

8. Mapping a Priority into the Scheduler Domain

_Scheduler_Map_priority maps a given priority from the application domain into the scheduler domain.

9. Mapping a Priority Back to the Application Domain

_Scheduler_Unmap_priority maps a given priority from the scheduler domain back into the application domain.

10. Scheduler Node Initialization and Destruction

_Scheduler_Node_initialize initializes a thread's scheduler node when the thread is created.

_Scheduler_Node_destroy destroys the thread's scheduler node when the thread is destroyed.

11. Releasing and Cancelling Periodic Jobs

_Scheduler_Release_job releases a job for a new period of a periodic task.

_Scheduler_Cancel_job cancels the job of a periodic task.

12. Starting the Idle Task

_Scheduler_Start_idle starts the idle task for the scheduler.
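The yield/block/unblock scenarios above can be made concrete with a toy ready queue. This is an invented sketch, not RTEMS code (all names are hypothetical): unblock appends a task, block removes the head, yield rotates the head to the tail, and the heir is simply the queue head.

```c
#include <assert.h>
#include <string.h>

#define MAX_TASKS 8

typedef struct {
    int tasks[MAX_TASKS];
    int count;
} Ready_queue;

static void rq_unblock(Ready_queue *q, int tid)  /* put back on the ready queue */
{
    q->tasks[q->count++] = tid;
}

static void rq_block(Ready_queue *q)             /* remove the current head */
{
    memmove(q->tasks, q->tasks + 1, (size_t)(--q->count) * sizeof(int));
}

static void rq_yield(Ready_queue *q)             /* rotate the head to the tail */
{
    int head = q->tasks[0];
    rq_block(q);
    rq_unblock(q, head);
}

static int rq_heir(const Ready_queue *q)         /* next task to run, -1 if idle */
{
    return q->count > 0 ? q->tasks[0] : -1;
}
```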


ukui-window-switch is the background task switcher on the Kylin desktop; in practice it is an effect plugin library loaded by kwin. Recently our git repositories moved to CI-driven builds, and we found that the ukui-window-switch package built fine locally but misbehaved when built by CI. This article walks through the troubleshooting and the fix.

1. How ukui-window-switch Works

To understand the problem we first need to understand how ukui-window-switch runs. The package contents are as follows:

root@kylin:~/1# dpkg -L ukui-window-switch
/.
/usr
/usr/bin
/usr/bin/ukui-window-switch
/usr/lib
/usr/lib/aarch64-linux-gnu
/usr/lib/aarch64-linux-gnu/qt5
/usr/lib/aarch64-linux-gnu/qt5/plugins
/usr/lib/aarch64-linux-gnu/qt5/plugins/ukui-kwin
/usr/lib/aarch64-linux-gnu/qt5/plugins/ukui-kwin/effects
/usr/lib/aarch64-linux-gnu/qt5/plugins/ukui-kwin/effects/plugins
/usr/lib/aarch64-linux-gnu/qt5/plugins/ukui-kwin/effects/plugins/libwindowsview.so
/usr/share
/usr/share/doc
/usr/share/doc/ukui-window-switch
/usr/share/doc/ukui-window-switch/changelog.Debian.gz
/usr/share/doc/ukui-window-switch/copyright
/usr/share/kservices5
/usr/share/kservices5/ukui-kwin
/usr/share/kservices5/ukui-kwin/kwin4_window_switcher_thumbnail_grid.desktop
/usr/share/ukui-kwin
/usr/share/ukui-kwin/tabbox
/usr/share/ukui-kwin/tabbox/thumbnail_grid
/usr/share/ukui-kwin/tabbox/thumbnail_grid/contents
/usr/share/ukui-kwin/tabbox/thumbnail_grid/contents/ui
/usr/share/ukui-kwin/tabbox/thumbnail_grid/contents/ui/main.qml
/usr/share/ukui-kwin/tabbox/thumbnail_grid/metadata.desktop

The key file is the shared library libwindowsview.so. The program that loads it is kwin, so we look at the kwin code:

QList<KPluginMetaData> ScriptedEffectLoader::findAllEffects() const
{
#if defined(QT_NO_DEBUG)
    QString packageRoot = QStringLiteral("ukui-kwin/effects");
#else
    QString packageRoot = kwinApp()->applicationDirPath() + QLatin1String("/../../effects");
    if (access(packageRoot.toStdString().c_str(), F_OK) == -1)
        packageRoot = QStringLiteral("ukui-kwin/effects");
    qDebug() << "Load effects from:" << packageRoot;
#endif
    return KPackage::PackageLoader::self()->listPackages(s_serviceType, packageRoot);
}

The effect itself is loaded by the loadEffect function:

bool PluginEffectLoader::loadEffect(const QString &name)
{
    const auto info = findEffect(name);
    if (!info.isValid()) {
        return false;
    }
    return loadEffect(info, LoadEffectFlag::Load);
}

The factory function is what matters here:

EffectPluginFactory *PluginEffectLoader::factory(const KPluginMetaData &info) const
{
    if (!info.isValid()) {
        return nullptr;
    }
    QString fileName = info.fileName();
    if (0 == info.pluginId().compare("UKUI-KWin-Windows-View")) {
        QString tmpFile = qEnvironmentVariableIsSet("UKUI-KWin-Windows-View_LIBRARY") ?
            qgetenv("UKUI-KWin-Windows-View_LIBRARY") : info.fileName();
        if (QFile::exists(tmpFile))
            fileName = tmpFile;
    }
    KPluginLoader loader(fileName);
    if (loader.pluginVersion() != KWIN_EFFECT_API_VERSION) {
        qDebug() << info.pluginId() << " has not matching plugin version, expected "
                 << KWIN_EFFECT_API_VERSION << "got " << loader.pluginVersion();
        return nullptr;
    }
    KPluginFactory *factory = loader.factory();
    if (!factory) {
        qDebug() << "Did not get KPluginFactory for " << info.pluginId();
        return nullptr;
    }
    return dynamic_cast< EffectPluginFactory* >(factory);
}

The code above shows how the .so is loaded. Next, the error message.

2. The Error Message

The ukui_kwin error log lives in ukui_kwin_0.log, where we find:

"UKUI-KWin-Windows-View" has not matching plugin version, expected 229 got 4294967295

The expected plugin version is 229, but we got 4294967295, which works out to 0xffffffff.
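Before digging further, note that the mysterious 4294967295 is simply the two's-complement bit pattern of -1 viewed as an unsigned 32-bit integer:

```c
#include <assert.h>
#include <stdint.h>

/* Reinterpret a signed 32-bit value as unsigned, as happens when a
 * function returning quint32 hands back qint32(-1). */
static uint32_t as_u32(int32_t v)
{
    return (uint32_t)v;
}
```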

Back in the code, we note the macro:

KWIN_EFFECT_API_VERSION

and the function:

loader.pluginVersion()

3. Troubleshooting

On the kwin side, KWIN_EFFECT_API_VERSION is 229. On the ukui-window-switch side, the version that ends up in the .so is exported from the code below:

windowsview/multitaskviewmanagerpluginfactory.cpp
class MultitaskViewManagerPluginFactory : public KWin::EffectPluginFactory
{
    Q_OBJECT
    Q_INTERFACES(KPluginFactory)
    Q_PLUGIN_METADATA(IID KPluginFactory_iid FILE "windowsview.json")
public:
    MultitaskViewManagerPluginFactory() {}
    ~MultitaskViewManagerPluginFactory() override {}

    KWin::Effect* createEffect() const override
    {
        return new MultitaskView::MultitaskViewManager();
    }
};

K_EXPORT_PLUGIN_VERSION(KWIN_EFFECT_API_VERSION)

Here K_EXPORT_PLUGIN_VERSION exports KWIN_EFFECT_API_VERSION.

Back in kwin, we find the corresponding definitions:

#define KWIN_EFFECT_API_MAKE_VERSION( major, minor ) (( major ) << 8 | ( minor ))
#define KWIN_EFFECT_API_VERSION_MAJOR 0
#define KWIN_EFFECT_API_VERSION_MINOR 229
#define KWIN_EFFECT_API_VERSION KWIN_EFFECT_API_MAKE_VERSION( \
    KWIN_EFFECT_API_VERSION_MAJOR, KWIN_EFFECT_API_VERSION_MINOR )

K_EXPORT_PLUGIN_VERSION(quint32(KWIN_EFFECT_API_VERSION))
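The packing rule is the same one KWIN_EFFECT_API_MAKE_VERSION uses: the major number goes into the high bits and the minor into the low 8 bits. With major 0 the packed version equals the minor number, which is why the expected value is exactly 229. A minimal sketch (MAKE_VERSION is our own stand-in macro):

```c
#include <assert.h>

/* Stand-in for KWIN_EFFECT_API_MAKE_VERSION: major in the high bits,
 * minor in the low 8 bits. */
#define MAKE_VERSION(major, minor) ((( major ) << 8) | ( minor ))
```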

Note the implementation of the K_EXPORT_PLUGIN_VERSION macro:

/**
 * \relates KPluginLoader
 * Use this macro if you want to give your plugin a version number.
 * You can later access the version number with KPluginLoader::pluginVersion()
 */
#define K_EXPORT_PLUGIN_VERSION(version) \
    Q_EXTERN_C Q_DECL_EXPORT const quint32 kde_plugin_version = version;

So K_EXPORT_PLUGIN_VERSION simply defines an exported const quint32 named kde_plugin_version.

The puzzle: ukui-kwin and ukui-window-switch use identical definitions, both yielding 229, so why the mismatch?

3.1 Checking the Constant's Actual Value

Although the code says 229 on both sides, the error clearly shows one side as 229 and the other as 0xffffffff. Since kde_plugin_version is a const object, we can read its value straight out of the binaries.

For the locally built .so, look up kde_plugin_version:

# readelf -s libwindowsview.so | grep kde_plugin_version
   491: 0000000000039e6c     4 OBJECT  GLOBAL DEFAULT   13 kde_plugin_version

This gives the offset 0x0000000000039e6c; objdump of the library shows:

# objdump -s libwindowsview.so | grep "^ 39e"
 39e0  00000000 00000000 0b9d0000 12000000  ................
 39e08 4d756c74 69746173 6b566965 774d616e  MultitaskViewMan
 39e18 61676572 506c7567 696e4661 63746f72  agerPluginFactor
 39e28 79000000 00000000 08000000 00000000  y...............
 39e38 00000000 00000000 00000000 00000000  ................
 39e48 00000000 00000000 00000000 00000000  ................
 39e58 00000000 00000000 00000000 00000000  ................
 39e68 00000000 e5000000 001b00ec 37fd0075  ............7..u
 39e78 006b0075 0069002d 00770069 006e0064  .k.u.i.-.w.i.n.d
 39e88 006f0077 002d0073 00770069 00740063  .o.w.-.s.w.i.t.c
 39e98 0068005f 007a0068 005f0043 004e002e  .h._.z.h._.C.N..
 39ea8 0071006d 001b07ec 323d0075 006b0075  .q.m....2=.u.k.u
 39eb8 0069002d 00770069 006e0064 006f0077  .i.-.w.i.n.d.o.w
 39ec8 002d0073 00770069 00740063 0068005f  .-.s.w.i.t.c.h._
 39ed8 0062006f 005f0043 004e002e 0071006d  .b.o._.C.N...q.m
 39ee8 00060703 7dc30069 006d0061 00670065  ....}..i.m.a.g.e
 39ef8 00730003 0000783c 0071006d 006c000f  .s....x<.q.m.l..

At offset 0x0000000000039e6c the value is 0x000000e5, i.e. 229.

For the CI-built .so, look up kde_plugin_version again:

# readelf -s libwindowsview.so | grep kde_plugin_version
   660: 000000000003a8e4     4 OBJECT  GLOBAL DEFAULT   13 kde_plugin_version

Its offset is 0x000000000003a8e4; objdump gives:

# objdump -s libwindowsview.so | grep "^ 3a8"
 3a80  00000000 00000000 18280000 12000000  .........(......
 3a800 56003100 30000000 6f72672e 756b7569  V.1.0...org.ukui
 3a810 2e4b5769 6e000000 2f4d756c 74697461  .KWin.../Multita
 3a820 736b5669 65770000 6f72672e 6b64652e  skView..org.kde.
 3a830 4b506c75 67696e46 6163746f 72790000  KPluginFactory..
 3a840 33334d75 6c746974 61736b56 6965774d  33MultitaskViewM
 3a850 616e6167 6572506c 7567696e 46616374  anagerPluginFact
 3a860 6f727900 00000000 ffffffff 21000000  ory.........!...
 3a870 00000000 00000000 18000000 00000000  ................
 3a880 4d756c74 69746173 6b566965 774d616e  MultitaskViewMan
 3a890 61676572 506c7567 696e4661 63746f72  agerPluginFactor
 3a8a0 79000000 00000000 08000000 00000000  y...............
 3a8b0 00000000 00000000 00000000 00000000  ................
 3a8c0 00000000 00000000 00000000 00000000  ................
 3a8d0 00000000 00000000 00000000 00000000  ................
 3a8e0 00000000 e5000000 001b00ec 37fd0075  ............7..u
 3a8f0 006b0075 0069002d 00770069 006e0064  .k.u.i.-.w.i.n.d

At offset 0x000000000003a8e4 the value is again 0x000000e5, decimal 229.
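Reading the dump relies on aarch64 being little-endian: the four bytes "e5 00 00 00" at the symbol's offset decode to 0x000000e5, i.e. 229. A small decoder makes this explicit:

```c
#include <assert.h>
#include <stdint.h>

/* Decode four little-endian bytes into a 32-bit value, the same way the
 * CPU reads kde_plugin_version out of the .so's data section. */
static uint32_t le32(const uint8_t b[4])
{
    return (uint32_t)b[0] | ((uint32_t)b[1] << 8) |
           ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
}
```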

So the problem has nothing to do with compilation: both binaries contain 229. The problem must be in the matching logic at load time, and the 0xffffffff must be set somewhere deliberately.

3.2 Determining the Version Value at Load Time

To confirm what happens at runtime, look at this call site again:

KPluginLoader loader(fileName);
if (loader.pluginVersion() != KWIN_EFFECT_API_VERSION) {

We focus on the KPluginLoader class.

It is implemented in kcoreaddons, in src/lib/plugin/kpluginloader.cpp:

quint32 KPluginLoader::pluginVersion()
{
    Q_D(const KPluginLoader);
    if (!load()) {
        return qint32(-1);
    }
    return d->pluginVersion;
}

So the version is indeed forced to -1, and -1 is exactly 0xffffffff, which proves that load() failed.

Now look at the load function:

bool KPluginLoader::load()
{
    Q_D(KPluginLoader);
    if (!d->loader->load()) {
        return false;
    }
    if (d->pluginVersionResolved) {
        return true;
    }

    Q_ASSERT(!fileName().isEmpty());
    QLibrary lib(fileName());
    Q_ASSERT(lib.isLoaded()); // already loaded by QPluginLoader::load()

    // TODO: this messes up KPluginLoader::errorString(): it will change from unknown error to could not resolve kde_plugin_version
    quint32 *version = reinterpret_cast<quint32 *>(lib.resolve("kde_plugin_version"));
    if (version) {
        d->pluginVersion = *version;
    } else {
        d->pluginVersion = ~0U;
    }
    d->pluginVersionResolved = true;

    return true;
}

So it must be d->loader->load() that returns false. Note the type of this loader:

class KPluginLoaderPrivate
{
    Q_DECLARE_PUBLIC(KPluginLoader)
protected:
    KPluginLoaderPrivate(const QString &libname)
        : name(libname),
          loader(nullptr),
          pluginVersion(~0U),
          pluginVersionResolved(false)
    {}
    ~KPluginLoaderPrivate() {}

    KPluginLoader *q_ptr;
    const QString name;
    QString errorString;
    QPluginLoader *loader;
    quint32 pluginVersion;
    bool pluginVersionResolved;
};

loader is a QPluginLoader *.

So it is Qt's plugin loading that fails, which is why 229 ends up reported as -1.

3.3 Debugging with QT_DEBUG_PLUGINS

To watch ukui-kwin load the library, i.e. to trace the QPluginLoader loading process, Qt provides an environment variable. Verify as follows:

kill -9 $(pidof /usr/bin/ukui-kwin_x11)
QT_DEBUG_PLUGINS=1 /usr/bin/ukui-kwin_x11

The log now lands in /home/kylin/.log/ukui_kwin_0.log.

We get the following information:

250211 09:56:08.820 Debug[19478]: Cannot load library /usr/lib/aarch64-linux-gnu/qt5/plugins/ukui-kwin/effects/plugins/libwindowsview.so: (/usr/lib/aarch64-linux-gnu/qt5/plugins/ukui-kwin/effects/plugins/libwindowsview.so: undefined symbol: glXGetFBConfigAttrib)
250211 09:56:08.821 Warning[19478]: QLibraryPrivate::loadPlugin failed on "/usr/lib/aarch64-linux-gnu/qt5/plugins/ukui-kwin/effects/plugins/libwindowsview.so" : "Cannot load library /usr/lib/aarch64-linux-gnu/qt5/plugins/ukui-kwin/effects/plugins/libwindowsview.so: (/usr/lib/aarch64-linux-gnu/qt5/plugins/ukui-kwin/effects/plugins/libwindowsview.so: undefined symbol: glXGetFBConfigAttrib)"
250211 09:56:08.821 Debug[19478]: "UKUI-KWin-Windows-View" has not matching plugin version, expected 229 got 4294967295
250211 09:56:08.821 Debug[19478]: Couldn't get an EffectPluginFactory for: "UKUI-KWin-Windows-View"

The key information: undefined symbol: glXGetFBConfigAttrib.

This is a GLX function, but our system uses glesv2, so the GLX code path can be dropped.

4. The Fix

First find where glXGetFBConfigAttrib is called:

# grep -nr glXGetFBConfigAttrib
Binary file obj-aarch64-linux-gnu/windowsview/CMakeFiles/windowsview.dir/glxtexturehandler.cpp.o matches
windowsview/glxtexturehandler.cpp:319:        glXGetFBConfigAttrib(dpy, configs[i], GLX_RED_SIZE, &red);
windowsview/glxtexturehandler.cpp:320:        glXGetFBConfigAttrib(dpy, configs[i], GLX_GREEN_SIZE, &green);
windowsview/glxtexturehandler.cpp:321:        glXGetFBConfigAttrib(dpy, configs[i], GLX_BLUE_SIZE, &blue);
windowsview/glxtexturehandler.cpp:327:        glXGetFBConfigAttrib(dpy, configs[i], GLX_VISUAL_ID, (int *) &visual);
windowsview/glxtexturehandler.cpp:333:        glXGetFBConfigAttrib(dpy, configs[i], GLX_BIND_TO_TEXTURE_RGBA_EXT, &bind_rgba);
windowsview/glxtexturehandler.cpp:334:        glXGetFBConfigAttrib(dpy, configs[i], GLX_BIND_TO_TEXTURE_RGB_EXT, &bind_rgb);
windowsview/glxtexturehandler.cpp:340:        glXGetFBConfigAttrib(dpy, configs[i], GLX_BIND_TO_TEXTURE_TARGETS_EXT, &texture_targets);
windowsview/glxtexturehandler.cpp:346:        glXGetFBConfigAttrib(dpy, configs[i], GLX_DEPTH_SIZE, &depth);
windowsview/glxtexturehandler.cpp:347:        glXGetFBConfigAttrib(dpy, configs[i], GLX_STENCIL_SIZE, &stencil);

So windowsview/glxtexturehandler.cpp calls libGL.so APIs. The symbol is pulled in because glxtexturehandler.cpp.o gets linked into libwindowsview.so, even though we do not actually need it. Check the build log:

https://dev.kylinos.cn/+librarian/14396573/buildlog_kylin-desktop-v101-arm64.ukui-window-switch_3.1.0.1-0k0.1tablet8rk1.egf0.1build1_BUILDING.txt.gz

Note the ld step:

[screenshot: the ld link step from the build log]

The relocatable .o file is indeed linked into the .so.

Based on the CMakeLists.txt, HAVE_GLX should control whether the file is compiled in:

set(SRCS
    abstracthandler.cpp
    concretetexturehandler.cpp
    egltexturehandler.cpp
    windowthumbnail.cpp
    desktopbackground.cpp
    multitaskviewmanagerpluginfactory.cpp
)

# glxtexturehandler.cpp is discarded when HAVE_GLX is not set
if (${HAVE_GLX})
    list(APPEND SRCS glxtexturehandler.cpp)
endif()

# translation
find_package(QT NAMES Qt6 Qt5 COMPONENTS LinguistTools REQUIRED)
find_package(Qt${QT_VERSION_MAJOR} COMPONENTS LinguistTools REQUIRED)

After the change, rebuild:

[screenshot: the ld link step after the fix]

Now ld no longer links glxtexturehandler.cpp.o into libwindowsview.so, and the problem is fixed.


Introduction to the eventfd System Call

eventfd is a system call built on anonymous files and designed for efficient inter-process communication. This article introduces the kernel implementation of eventfd and a userspace test, so that eventfd can be considered in future programming.

Kernel Implementation

eventfd is implemented as system calls:

SYSCALL_DEFINE2(eventfd2, unsigned int, count, int, flags)
{
    return do_eventfd(count, flags);
}

SYSCALL_DEFINE1(eventfd, unsigned int, count)
{
    return do_eventfd(count, 0);
}

Both land in do_eventfd:

static int do_eventfd(unsigned int count, int flags)
{
    struct eventfd_ctx *ctx;
    struct file *file;
    int fd;

    /* Check the EFD_* constants for consistency.  */
    BUILD_BUG_ON(EFD_CLOEXEC != O_CLOEXEC);
    BUILD_BUG_ON(EFD_NONBLOCK != O_NONBLOCK);

    if (flags & ~EFD_FLAGS_SET)
        return -EINVAL;

    ctx = kmalloc(sizeof(*ctx), GFP_KERNEL);
    if (!ctx)
        return -ENOMEM;

    kref_init(&ctx->kref);
    init_waitqueue_head(&ctx->wqh);
    ctx->count = count;
    ctx->flags = flags;
    ctx->id = ida_simple_get(&eventfd_ida, 0, 0, GFP_KERNEL);

    flags &= EFD_SHARED_FCNTL_FLAGS;
    flags |= O_RDWR;
    fd = get_unused_fd_flags(flags);
    if (fd < 0)
        goto err;

    file = anon_inode_getfile("[eventfd]", &eventfd_fops, ctx, flags);
    if (IS_ERR(file)) {
        put_unused_fd(fd);
        fd = PTR_ERR(file);
        goto err;
    }

    file->f_mode |= FMODE_NOWAIT;
    fd_install(fd, file);
    return fd;
err:
    eventfd_free_ctx(ctx);
    return fd;
}

The important point in do_eventfd is anon_inode_getfile, which backs the new descriptor with an anonymous-inode file.

Equally important is the eventfd_ctx structure:

struct eventfd_ctx {
    struct kref kref;
    wait_queue_head_t wqh;
    /*
     * Every time that a write(2) is performed on an eventfd, the
     * value of the __u64 being written is added to "count" and a
     * wakeup is performed on "wqh". A read(2) will return the "count"
     * value to userspace, and will reset "count" to zero. The kernel
     * side eventfd_signal() also, adds to the "count" counter and
     * issue a wakeup.
     */
    __u64 count;
    unsigned int flags;
    int id;
};

read and write both act on the count value, so this fd can only convey information through the counter. read/write are wired up through a standard file_operations table:

static const struct file_operations eventfd_fops = {
#ifdef CONFIG_PROC_FS
    .show_fdinfo = eventfd_show_fdinfo,
#endif
    .release     = eventfd_release,
    .poll        = eventfd_poll,
    .read_iter   = eventfd_read,
    .write       = eventfd_write,
    .llseek      = noop_llseek,
};

The core of the read path is eventfd_ctx_do_read: if EFD_SEMAPHORE is set, a read only consumes 1; otherwise it takes the whole count:

void eventfd_ctx_do_read(struct eventfd_ctx *ctx, __u64 *cnt)
{
    lockdep_assert_held(&ctx->wqh.lock);
    *cnt = ((ctx->flags & EFD_SEMAPHORE) && ctx->count) ? 1 : ctx->count;
    ctx->count -= *cnt;
}
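The read arithmetic can be exercised in userspace. This is a re-implementation of eventfd_ctx_do_read's logic only (demo_do_read is an invented name; EFD_SEMAPHORE is (1 << 0), matching the kernel flag): semaphore mode hands the count out one unit per read, while normal mode drains the whole counter.

```c
#include <assert.h>
#include <stdint.h>

#define EFD_SEMAPHORE 1  /* (1 << 0), same value as the kernel flag */

/* Userspace mirror of the arithmetic in eventfd_ctx_do_read(). */
static uint64_t demo_do_read(uint64_t *count, int flags)
{
    uint64_t cnt = ((flags & EFD_SEMAPHORE) && *count) ? 1 : *count;
    *count -= cnt;
    return cnt;
}
```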

The write path lives in eventfd_write; every write adds to the counter:

ctx->count += ucnt

The poll path waits via poll_wait, and READ_ONCE ensures count is read exactly once. The comment below shows how the poll and write paths interleave on ctx->wqh.lock (spelled qwh in places) without racing, i.e. safely:

 *     poll                               write
 *     -----------------                  ------------
 *     lock ctx->wqh.lock (in poll_wait)
 *     count = ctx->count
 *     __add_wait_queue
 *     unlock ctx->wqh.lock
 *                                        lock ctx->qwh.lock
 *                                        ctx->count += n
 *                                        if (waitqueue_active)
 *                                          wake_up_locked_poll
 *                                        unlock ctx->qwh.lock
 *     eventfd_poll returns 0

The implementation:

static unsigned int eventfd_poll(struct file *file, poll_table *wait)
{
    struct eventfd_ctx *ctx = file->private_data;
    unsigned int events = 0;
    u64 count;

    poll_wait(file, &ctx->wqh, wait);
    count = READ_ONCE(ctx->count);

    if (count > 0)
        events |= POLLIN;
    if (count == ULLONG_MAX)
        events |= POLLERR;
    if (ULLONG_MAX - 1 > count)
        events |= POLLOUT;

    return events;
}
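The readiness rules can be isolated as a pure function of the counter (demo_eventfd_events is an invented name; UINT64_MAX plays the role of the kernel's ULLONG_MAX): readable once the counter is non-zero, writable while at least one more unit still fits, error when the counter is saturated.

```c
#include <assert.h>
#include <stdint.h>
#include <poll.h>

/* Userspace mirror of the event computation in eventfd_poll(). */
static short demo_eventfd_events(uint64_t count)
{
    short events = 0;
    if (count > 0)
        events |= POLLIN;
    if (count == UINT64_MAX)
        events |= POLLERR;
    if (UINT64_MAX - 1 > count)
        events |= POLLOUT;
    return events;
}
```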

Kernel code can signal an eventfd through eventfd_signal, whose implementation is eventfd_signal_mask; it likewise adds to the counter:

__u64 eventfd_signal_mask(struct eventfd_ctx *ctx, __u64 n, unsigned mask)
{
    unsigned long flags;

    /*
     * Deadlock or stack overflow issues can happen if we recurse here
     * through waitqueue wakeup handlers. If the caller users potentially
     * nested waitqueues with custom wakeup handlers, then it should
     * check eventfd_signal_count() before calling this function. If
     * it returns true, the eventfd_signal() call should be deferred to a
     * safe context.
     */
    if (WARN_ON_ONCE(this_cpu_read(eventfd_wake_count)))
        return 0;

    spin_lock_irqsave(&ctx->wqh.lock, flags);
    this_cpu_inc(eventfd_wake_count);
    if (ULLONG_MAX - ctx->count < n)
        n = ULLONG_MAX - ctx->count;
    ctx->count += n;
    if (waitqueue_active(&ctx->wqh))
        wake_up_locked_poll(&ctx->wqh, EPOLLIN | mask);
    this_cpu_dec(eventfd_wake_count);
    spin_unlock_irqrestore(&ctx->wqh.lock, flags);

    return n;
}
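The clamping in the middle of that function is worth isolating: n is reduced so the counter can never overflow past the maximum, and the amount actually added is returned. A userspace mirror (demo_signal_add is an invented name; UINT64_MAX stands in for ULLONG_MAX):

```c
#include <assert.h>
#include <stdint.h>

/* Userspace mirror of the counter update in eventfd_signal_mask(). */
static uint64_t demo_signal_add(uint64_t *count, uint64_t n)
{
    if (UINT64_MAX - *count < n)
        n = UINT64_MAX - *count;  /* clamp so the counter cannot overflow */
    *count += n;
    return n;                     /* amount actually added */
}
```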

Userspace Test

To use eventfd we can simply call the C library's eventfd wrapper. Example:

#include <sys/eventfd.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>

int main()
{
    int efd;
    uint64_t value;

    efd = eventfd(0, 0);
    if (efd == -1) {
        perror("eventfd");
        return 1;
    }

    value = 1;
    if (write(efd, &value, sizeof(value)) == -1) {
        perror("write");
        return 1;
    }
    if (write(efd, &value, sizeof(value)) == -1) {
        perror("write");
        return 1;
    }

    if (read(efd, &value, sizeof(value)) == -1) {
        perror("read");
        return 1;
    }
    printf("[kylin]: read value: %lu\n", value);

    close(efd);
    return 0;
}

Running it prints:

[kylin]: read value: 2

The value 2 is the two writes of 1 accumulated in the counter; the read returns it and resets the counter to zero. That concludes this eventfd introduction; when you need high-performance inter-process communication, eventfd is well worth considering.

2025-03-29

A Brief Introduction to GICv3 Interrupts

GICv3 is an improvement over GICv2. Starting from GICv2, this article briefly discusses what GICv3 changes.

Interrupt Types

GICv2 supports the following interrupt types:

  • SGI (Software Generated Interrupt): software-generated interrupts, used by cores to send software interrupt signals to one another
  • PPI (Private Peripheral Interrupt): private peripheral interrupts owned by a single CPU core, such as a per-core timer
  • SPI (Shared Peripheral Interrupt): shared peripheral interrupts, which any CPU may service

GICv3 adds one new interrupt type:

  • LPI (Locality-specific Peripheral Interrupt): locality-specific peripheral interrupts, a message-based interrupt type

Interrupt Number Allocation

GICv2 assigns interrupt numbers (INTIDs) as follows:

Interrupt type    INTID range
SGI               0-15
PPI               16-31
SPI               32-1019
Reserved          1020-1023

GICv3 extends the GICv2 numbering with LPIs and a larger reserved range:

Interrupt type    INTID range
SGI               0-15
PPI               16-31
SPI               32-1019
Reserved          1020-8191
LPI               8192-(implementation defined)
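The GICv3 ranges in the table above can be written as a classifier function (the enum and function names are invented for this sketch):

```c
#include <assert.h>
#include <stdint.h>

typedef enum { INT_SGI, INT_PPI, INT_SPI, INT_RESERVED, INT_LPI } Int_type;

/* Classify an INTID according to the GICv3 allocation table. */
static Int_type intid_type(uint32_t intid)
{
    if (intid <= 15)
        return INT_SGI;
    if (intid <= 31)
        return INT_PPI;
    if (intid <= 1019)
        return INT_SPI;
    if (intid <= 8191)
        return INT_RESERVED;
    return INT_LPI;      /* 8192 and up, upper bound implementation defined */
}
```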

ITS

To support the message-based LPI interrupts, the GICv3 interrupt controller introduces the ITS (Interrupt Translation Service). The ITS translates a device's (DeviceID, EventID) pair into an INTID (the LPI hardware interrupt number), looks up the corresponding Redistributor through its tables, and finally forwards the interrupt to the CPU.
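The translation walk above can be modeled as a toy lookup. This is a deliberately simplified, hypothetical model, not the real ITS table format: a flat table maps (DeviceID, EventID) to an LPI INTID, and the entry also names the target Redistributor (here reduced to a cpu field).

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint32_t device_id;
    uint32_t event_id;
    uint32_t intid;  /* >= 8192 for LPIs */
    uint32_t cpu;    /* target Redistributor, reduced to a CPU number */
} Its_entry;

/* Walk the table for a (DeviceID, EventID) pair; NULL if untranslatable. */
static const Its_entry *its_translate(const Its_entry *tbl, size_t n,
                                      uint32_t dev, uint32_t ev)
{
    for (size_t i = 0; i < n; ++i)
        if (tbl[i].device_id == dev && tbl[i].event_id == ev)
            return &tbl[i];
    return NULL;
}
```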

MSI

The PCIe bus protocol supports message signaled interrupts (MSI); on GICv3 these are implemented through the ITS.