The message:
```
W: _lws_smd_msg_send: rejecting message on queue depth 40
```
is a warning from the libwebsockets (LWS) system message dispatcher (smd) subsystem. Let’s break it down:
🔍 Meaning
W: → Warning log level.
_lws_smd_msg_send → Internal libwebsockets function that tries to send system messages.
rejecting message → The message wasn’t sent.
on queue depth 40 → The dispatcher's internal queue already holds 40 undelivered messages, which is the default maximum depth, so the new message was dropped.
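For concreteness, here is roughly what the producer side looks like. This is a minimal sketch using libwebsockets' public SMD helpers (lws_smd_msg_printf() and the LWSSMDCL_* classes from lws v4.x); publish_status() is a hypothetical wrapper for illustration, not anything from the adapter's actual firmware. The public send helpers funnel into the internal _lws_smd_msg_send(), which is what logs the rejection when the queue is already full:

```c
#include <libwebsockets.h>

/* Hypothetical producer: publishes a system-state message onto the
 * SMD bus.  Internally this goes through _lws_smd_msg_send(); if the
 * dispatcher already holds its maximum number of undelivered messages
 * (40 by default), the send is rejected and the warning above is
 * logged. */
static int
publish_status(struct lws_context *cx, const char *state)
{
	/* Formats, allocates and queues the message in one call; a
	 * nonzero return means it could not be queued. */
	return lws_smd_msg_printf(cx, LWSSMDCL_SYSTEM_STATE,
				  "{\"state\":\"%s\"}", state);
}
```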
💡 Why It Happens
This occurs when:
The system is generating too many messages too quickly.
The receiving end (your UI or web interface) isn’t processing them fast enough.
The message queue fills up, and libwebsockets starts rejecting new messages to avoid overflow.
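The depth limit itself isn't something you can change on a closed appliance like the ECB7250, but if you were embedding libwebsockets in your own application, it is set at context creation time. A minimal sketch, assuming a recent lws version where lws_context_creation_info has the smd_queue_depth member (0 falls back to the default of 40):

```c
#include <string.h>
#include <libwebsockets.h>

int main(void)
{
	struct lws_context_creation_info info;
	struct lws_context *cx;

	memset(&info, 0, sizeof info);
	info.port = CONTEXT_PORT_NO_LISTEN; /* SMD only, no listen socket */
	info.smd_queue_depth = 128;         /* 0 would mean the default, 40 */

	cx = lws_create_context(&info);
	if (!cx)
		return 1;

	/* the application's lws_service() event loop would run here */

	lws_context_destroy(cx);
	return 0;
}
```

Note that raising the depth only buys headroom: if consumers never drain the queue fast enough, it will eventually fill at any size.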
🧠 Context (ScreenBeam ECB7250)
On MoCA adapters like the ECB7250, this warning can happen if:
The web UI is left open for a long time while generating background updates.
You’re rapidly refreshing pages or polling status data.
There's debug or diagnostic traffic backing up the internal LWS queue.
✅ Is it a problem?
No, not usually. It’s a non-fatal warning: the excess messages are dropped gracefully and the adapter continues running normally.
However, if you're seeing:
Lag or freezing in the web UI
Status not updating
Repeated log floods of this message
…then it’s worth:
Restarting the web interface or device
Closing any inactive browser tabs using the web UI
Ensuring your browser/device isn’t overloading it with polling requests
@Alta-MikeD just wanted to make you aware of this random issue on one of my Route10s.
I’d be happy to invite you to the site to troubleshoot.
I’m happy to take a look if you’d like me to.
Please note that that version of top doesn’t support the per-core toggle (pressing 1 in most full top builds), which would better reflect the actual load if one core were working much harder than the others.
I am curious about the mwan3rtmon process, though. So, question: do you have IPv6 enabled on your WANs? If so, please note that we only support enabling it on one WAN (at a time) per Route10. If you have it enabled on both WAN interfaces, on one or both Route10s, it could be something as simple as that causing the unexpected CPU use.
I don’t see Suricata, so unless you’ve manually disabled offloading or flow control (which in turn disables offloading), CPU use should still be low. At a glance, that’s why I’m wondering about IPv6.
As for the SFP interface: what kind of module is that? Have you tried just setting a static interface speed?