Blog: Google SoC lwIP Report Week 10
- Posts: 32
- Joined: Sat May 23, 2015 5:54 pm
Re: Blog: Google SoC lwIP Report Week 10
Getting error loading the driver (compiled with GCC)
[ external image ]
[ external image ]
Re: Blog: Google SoC lwIP Report Week 10
Hi, please take a backtrace (command: bt at the kernel debugger prompt), press Enter, then take screenshots. Thanks in advance!
Re: Blog: Google SoC lwIP Report Week 10
I don't think Dian's bothered building with GCC since he started using windbg in order to debug the driver.
Re: Blog: Google SoC lwIP Report Week 10
The last commit says "mostly fully working TCP". Is the multi-threading and thread-safety work finished? What needs to be done next?
- Posts: 1790
- Joined: Fri Aug 07, 2009 5:11 am
- Location: USA
Re: Blog: Google SoC lwIP Report Week 10
To be honest, we might need a bounty to get someone to build our own IP stack and all. I mean, everyone keeps writing wrappers for LWIP and everything gets bugged because it seems we are using LWIP in ways it was not intended to be used. It wasn't intended for multi-threaded applications. If all the time that was spent on getting it to work for us were spent on making our own network stack and drivers, I think we'd be better off by now. Really, if someone knows what they are doing, they could rewrite parts of LWIP to be thread safe, but it sounds like, from what I read, that writing from scratch would have been easier.
Re: Blog: Google SoC lwIP Report Week 10
I've been going through a lot of deadlock fixes for the last two days. I introduced new ones in the latest commit that's on SVN, but the one before that one actually works quite well. The latest commit was an attempt to fix cancellation issues. I've been working since then, and this issue you're seeing may or may not have been fixed already.
Re: Blog: Google SoC lwIP Report Week 10
PurpleGurl wrote: I mean, everyone keeps writing wrappers for LWIP and everything gets bugged
Can you give references, so I could count these people?
PurpleGurl wrote: because it seems we are using LWIP in ways it was not intended to be used. It wasn't intended for multi-threaded applications. If all the time that was spent on getting it to work for us were spent on making our own network stack and drivers, I think we'd be better off by now.
Well, the fundamental tasks (of parallelism) would be essentially the same, I guess. Thus IMO trying lwIP is quite a good way to find out what ReOS really needs at the core of the stack.
PurpleGurl wrote: Really, if someone knows what they are doing, they could rewrite parts of LWIP to be thread safe, but it sounds like, from what I read, that writing from scratch would have been easier.
Or maybe just use another stack which is more parallel-friendly, hm? :)
See below, please.
Re: Blog: Google SoC lwIP Report Week 10
Some weeks ago, inspired by the experiments and blog entries of zhu48, I decided to search for alternatives to lwIP to see how they deal with the same task. I found several stacks -- mostly targeted at embedded systems, and only partial or very minimalistic, though.
One of them caught my eye, however -- it's called mTCP. Although it's targeted at multicore systems...:
This work presents mTCP, a high-performance userlevel TCP stack for multicore systems.
... it seems to know and handle parallelism from the beginning:
The source code: https://github.com/eunyoung14/mtcp
mTCP addresses the inefficiencies from the ground up—from packet I/O and TCP connection management to the application interface. In addition to adopting well-known techniques, our design
(1) translates multiple expensive system calls into a single shared memory reference,
(2) allows efficient flowlevel event aggregation, and
(3) performs batched packet I/O for high I/O efficiency.
Our evaluations on an 8-core machine showed that mTCP improves the performance of small message transactions by a factor of 25 compared to the latest Linux TCP stack [...] It also improves the performance of various popular applications by 33% to 320% compared to those on the Linux stack.
Of course, the project seems to be quite large -- as of July 2016 it contained around 2.4 MLoC, though in March 2015 it contained only 400 kLoC. Compare this to lwIP, which contained around 150 kLoC as of June 2016.
And then it requires what I suppose are some high-performance libraries -- namely libps or libnuma:
We require the following libraries to run mTCP.
- libps (PacketShader I/O engine library) OR libdpdk (Intel's DPDK package*)
- libnuma
- libpthread
- librt
Thus it may be overkill or even unacceptable for ReactOS.
But OTOH, might it be enough just to read its code / the article to see how mTCP does its synchronization?
I address this question to zhu48 and maybe his mentor Art Yerkes (primarily).
Re: Blog: Google SoC lwIP Report Week 10
Interesting. I will take a look at this thing, see how it compares.
Something to note: lwIP 2.0.0 was recently released with better multithreading support in its netconn API. It is, however, still a big-global-lock approach and not designed from the ground up to be thread safe.
Re: Blog: Google SoC lwIP Report Week 10
I didn't literally mean "everyone," just the ROS devs who worked with it. My point is that it would likely be easier to write the stack and drivers from scratch. I'm not a references or sources person.
I too got to researching, and found that this is a problem for others who use it. They tried to make it platform independent and thus left out this sort of handling, leaving that for those who actually use their code.
Now, mTCP sounds good, but it is likely too resource heavy for us and may rely on code we may not want to include. That said, it may be worth studying it.
Last edited by PurpleGurl on Wed Aug 17, 2016 8:26 pm, edited 1 time in total.
- Posts: 10
- Joined: Thu Jul 19, 2012 2:39 pm
Re: Blog: Google SoC lwIP Report Week 10
saulius2 wrote: Some weeks ago, being inspired by the experiments and blog entries of zhu48, I decided to search for alternatives of lwIP to see how do they deal with the same task. I found several stacks -- mostly targeted at embedded systems and being only partial or very minimalistic, though.
One of them caught my sight, however -- it's called mTCP...
In this paper I can clearly read two things:
- It's a user-mode library :-/
- It's focused on *nix... again :-/
So it's a no-no... There's a reason why LWIP was chosen over *nix alternatives like the OSKit BSD TCP/IP stack (ROS used to have OSKit as its TCP/IP stack before it was ditched in favor of LWIP, and even MS ditched it in favor of their own). Simply speaking... stuff designed with *nix architecture in mind doesn't play well with NT as it is, and maintaining wrapper patches for these things is a burden in the best cases and a nightmare in the worst. At least LWIP is somewhat platform agnostic.
So, unless there's a better, more NT-friendly alternative, or someone wants to jump in and write a TCP/IP stack tailored to the NT NDIS model, just like MS did, LWIP seems the most sane choice, even with its bundled problems.
Re: Blog: Google SoC lwIP Report Week 10
mTCP looks like a highly involved and highly optimized TCP implementation targeted towards POSIX systems. To integrate it transparently into ReactOS as the primary TCP driver seems to me like a larger task than doing our own implementation of their optimization techniques. Many of mTCP's optimizations involve CPU core locality and batched I/O and system calls. As such, mTCP would likely include code that replaces functionality implemented in AFD/TDI, not just the low level TCP protocol driver that interfaces with AFD/TDI. It may even involve changes to the kernel itself. The specific focus towards NUMA and use of POSIX threads, I imagine, are big problems (overkill for NUMA, compatibility for pThreads) for integrating into ReactOS. Even if we can't directly use the code, it's probably worth reading through as a reference though.
Re: Blog: Google SoC lwIP Report Week 10
PurpleGurl wrote: I didn't literally mean "everyone," just the ROS devs who worked with it. My point is that it would likely be easier to write the stack and drivers from scratch. I'm not a references or sources person.
PurpleGurl, I didn't either. My idea was to find responses from previous attempts; knowing their count would help build a general picture. But maybe Google will answer me. So far I have found only posts from Cameron Gutman, circa 2011.
raijinzrael and zhu48, thank you for the insights. Had I been a ReOS veteran user, maybe I would have known the answers already. Some more questions follow.
raijinzrael wrote: In this paper i can read clearly 2 things.
-Is an user mode library :-/
-Is focused in *nix... again :-/
[...] Simply speaking... Stuff with *nix architecture in mind doesn't play well with NT [...] At least LWIP is somewhat platform agnostic.
OK, I easily accept the second issue. But how much is the first one relevant to ReactOS?
zhu48 wrote: mTCP looks like [...] implementation targeted towards POSIX systems. To integrate it transparently into ReactOS as the primary TCP driver seems to me like a larger task than doing our own implementation of their optimization techniques. Many of mTCP's optimizations involve CPU core locality and batched I/O and system calls. As such, mTCP would likely include code that replaces functionality implemented in AFD/TDI, not just the low level TCP protocol driver that interfaces with AFD/TDI. It may even involve changes to the kernel itself.
zhu48, thanks for linking some of the details together for the others. The picture gets clearer.
zhu48 wrote: The specific focus towards NUMA and use of POSIX threads, I imagine, are big problems (overkill for NUMA, compatibility for pThreads) for integrating into ReactOS.
Right, this is why I tried to start by mentioning NUMA as overkill.
But I hadn't thought about pthreads. I thought mTCP implemented its own synchronization for multithreaded / parallel operations; now I guess they just use pthreads to do that.
zhu48 wrote: Even if we can't directly use the code, it's probably worth reading through as a reference though.
That is exactly what I was trying to emphasize -- maybe you or someone else could get some nice ideas for handling concurrency. E.g. the use of shared memory seemed handy to me in this respect. I am sorry if I failed to make that clear.
I wish you the best of luck in bringing clarity to the subject! And have a great weekend :)