Blog: Google SoC lwIP Report Week 10


CircularTriangle06
Posts: 32
Joined: Sat May 23, 2015 5:54 pm

Re: Blog: Google SoC lwIP Report Week 10

Post by CircularTriangle06 »

Getting an error loading the driver (compiled with GCC) :|
[screenshot of the error]
hbelusca
Developer
Posts: 1204
Joined: Sat Dec 26, 2009 10:36 pm
Location: Zagreb, Croatia

Re: Blog: Google SoC lwIP Report Week 10

Post by hbelusca »

Hi, please get a backtrace (type bt at the kernel debugger prompt and press Enter), then take screenshots :) Thanks in advance! :D :D
Z98
Release Engineer
Posts: 3379
Joined: Tue May 02, 2006 8:16 pm

Re: Blog: Google SoC lwIP Report Week 10

Post by Z98 »

I don't think Dian has bothered building with GCC since he started using WinDbg to debug the driver.
ctasan
Posts: 392
Joined: Mon Feb 24, 2014 7:02 pm
Location: Istanbul, Turkey

Re: Blog: Google SoC lwIP Report Week 10

Post by ctasan »

The last commit says "mostly fully working TCP". Is the multi-threading and thread-safety work finished? What still needs to be done?
ReactOS Community Changelogs | Admin of ReactOS Official Twitter Account | Admin of ReactOS Turkish Twitter Account @reactos_tr
PurpleGurl
Posts: 1790
Joined: Fri Aug 07, 2009 5:11 am
Location: USA

Re: Blog: Google SoC lwIP Report Week 10

Post by PurpleGurl »

To be honest, we might need a bounty to get someone to build our own IP stack and all. I mean, everyone keeps writing wrappers for lwIP and everything gets bugged, because it seems we are using lwIP in ways it was not intended to be used: it wasn't intended for multi-threaded applications. If all the time that was spent getting it to work for us had been spent on making our own network stack and drivers, I think we'd be better off by now. Really, someone who knows what they are doing could rewrite parts of lwIP to be thread safe, but from what I've read it sounds like writing from scratch would have been easier.
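
Just to illustrate the kind of constraint I mean, here is a minimal sketch (not actual ReactOS or port code, and the helper names are made up) of how lwIP's raw API expects to be used: everything has to be funneled onto the single tcpip thread, for example via tcpip_callback():

    /* Minimal sketch: lwIP's raw API is not thread safe, so work started on
     * any other thread has to be marshalled onto the lwIP tcpip thread. */
    #include "lwip/tcpip.h"
    #include "lwip/tcp.h"

    /* Runs on the tcpip thread, where raw-API calls are allowed. */
    static void create_listener(void *arg)
    {
        u16_t *port = (u16_t *)arg;
        struct tcp_pcb *pcb = tcp_new();
        if (pcb != NULL) {
            tcp_bind(pcb, IP_ADDR_ANY, *port);
            pcb = tcp_listen(pcb);
        }
    }

    /* Called from an arbitrary driver thread: must not touch the pcb directly. */
    void request_listener(u16_t *port)
    {
        tcpip_callback(create_listener, port);  /* defer onto the tcpip thread */
    }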
zhu48
Posts: 18
Joined: Fri Jun 03, 2016 5:28 pm

Re: Blog: Google SoC lwIP Report Week 10

Post by zhu48 »

I've been going through a lot of deadlock fixes over the last two days. I introduced new ones in the latest commit on SVN, but the commit before that actually works quite well. The latest commit was an attempt to fix cancellation issues. I've kept working since then, so the issue you're seeing may or may not have been fixed already.
saulius2
Posts: 18
Joined: Sat Jan 17, 2009 4:01 pm

Re: Blog: Google SoC lwIP Report Week 10

Post by saulius2 »

PurpleGurl wrote:I mean, everyone keeps writing wrappers for LWIP and everything gets bugged
Can you give references so I can count these people?
PurpleGurl wrote:because it seems we are using LWIP in ways it was not intended to be used. It wasn't intended for multi-threaded applications. If all the time that was spent on getting it to work for us were spent on making our own network stack and drivers, I think we'd be better off by now.
Well, the fundamental tasks (of parallelism) would be essentially the same, I guess. So IMO trying lwIP is a good way to find out what ReactOS really needs at the core of the stack.
PurpleGurl wrote:Really, someone who knows what they are doing could rewrite parts of LWIP to be thread safe, but from what I read it sounds like writing from scratch would have been easier.
Or maybe just use another stack that is more parallelism-friendly? :)

See below, please.
saulius2
Posts: 18
Joined: Sat Jan 17, 2009 4:01 pm

Re: Blog: Google SoC lwIP Report Week 10

Post by saulius2 »

Some weeks ago, inspired by the experiments and blog entries of zhu48, I decided to search for alternatives to lwIP to see how they deal with the same task. I found several stacks, though mostly targeted at embedded systems and only partial or very minimalistic.

One of them caught my eye, however: it's called mTCP. Although it's targeted at multicore machines...:
This work presents mTCP, a high-performance user-level TCP stack for multicore systems.

... it seems to know about and handle parallelism from the start:
mTCP addresses the inefficiencies from the ground up—from packet I/O and TCP connection management to the application interface. In addition to adopting well-known techniques, our design
(1) translates multiple expensive system calls into a single shared memory reference,
(2) allows efficient flow-level event aggregation, and
(3) performs batched packet I/O for high I/O efficiency.

Our evaluations on an 8-core machine showed that mTCP improves the performance of small message transactions by a factor of 25 compared to the latest Linux TCP stack [...] It also improves the performance of various popular applications by 33% to 320% compared to those on the Linux stack.
The source code: https://github.com/eunyoung14/mtcp
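
To give a feel for the interface, here is a rough sketch of its per-core, epoll-like application API, pieced together from the paper and the sample servers in the repository (exact names and signatures may differ between versions, so treat it only as an illustration):

    #include <sys/socket.h>
    #include <mtcp_api.h>
    #include <mtcp_epoll.h>

    /* One worker per core: each core gets its own mTCP context, so flow
     * events are aggregated per core without cross-core locking. */
    void run_core(int core)
    {
        mctx_t mctx = mtcp_create_context(core);   /* per-core TCP context */
        int ep = mtcp_epoll_create(mctx, 4096);
        int lsock = mtcp_socket(mctx, AF_INET, SOCK_STREAM, 0);
        /* ... mtcp_bind()/mtcp_listen(), then register lsock via mtcp_epoll_ctl() ... */

        struct mtcp_epoll_event events[4096];
        for (;;) {
            /* one call returns a whole batch of flow-level events;
             * no per-packet system calls are involved */
            int n = mtcp_epoll_wait(mctx, ep, events, 4096, -1);
            for (int i = 0; i < n; i++) {
                /* handle MTCP_EPOLLIN etc. with mtcp_accept()/mtcp_read()/mtcp_write() */
            }
        }
    }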

Of course, the project seems to be quite large: as of July 2016 it contained around 2.4 MLoC, though in March 2015 it was only about 400 kLoC. Compare that with lwIP, which contained around 150 kLoC as of June 2016.

And then it requires what I suppose are high-performance libraries, namely libps or libdpdk, plus libnuma:
We require the following libraries to run mTCP.
- libps (PacketShader I/O engine library) OR libdpdk (Intel's DPDK package*)
- libnuma
- libpthread
- librt
Thus it may be overkill, or even unacceptable, for ReactOS.

But OTOH, might it help just to read its code / the article to see how mTCP does its synchronization?
I address this question primarily to zhu48 and maybe his mentor Art Yerkes.
zhu48
Posts: 18
Joined: Fri Jun 03, 2016 5:28 pm

Re: Blog: Google SoC lwIP Report Week 10

Post by zhu48 »

Interesting. I will take a look at this thing and see how it compares.
Something to note: lwIP 2.0.0 was recently released with better multithreading support in its netconn API. It is, however, still a big-global-lock approach and not designed from the ground up to be thread safe.
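
Roughly speaking (a simplified sketch, not a quote of the lwIP sources), the 2.x core-locking mode boils down to one global mutex that every "thread-safe" API call takes around the entire stack:

    /* lwipopts.h */
    #define LWIP_TCPIP_CORE_LOCKING   1   /* serialize API calls on one global mutex */

    /* Simplified view of what each thread-safe API call does internally: */
    #include "lwip/tcpip.h"

    void call_into_stack(void)
    {
        LOCK_TCPIP_CORE();      /* one mutex guards the entire stack */
        /* ... raw-API work (tcp_write(), tcp_output(), ...) runs here ... */
        UNLOCK_TCPIP_CORE();
    }

That keeps the netconn/sockets API usable from multiple threads, but it serializes the whole stack rather than making it genuinely concurrent.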
PurpleGurl
Posts: 1790
Joined: Fri Aug 07, 2009 5:11 am
Location: USA

Re: Blog: Google SoC lwIP Report Week 10

Post by PurpleGurl »

I didn't literally mean "everyone," just the ROS devs who worked with it. My point is that it would likely be easier to write the stack and drivers from scratch. I'm not a references or sources person.

I did some researching too, and found that this is a problem for others who use it. The lwIP developers tried to make it platform independent and thus left out this sort of handling, leaving it to those who actually use their code.
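
For example (just an illustration with an assumed type layout, not the actual ReactOS port), lwIP only declares its OS abstraction layer, and every port has to fill it in itself, say by mapping the mutex hooks onto NT kernel primitives:

    /* sys_arch.c (illustrative port glue, not the real ReactOS sources) */
    #include <ntddk.h>
    #include "lwip/sys.h"

    /* assumes the port defines sys_mutex_t as a struct wrapping a KMUTEX */
    err_t sys_mutex_new(sys_mutex_t *mutex)
    {
        KeInitializeMutex(&mutex->Mutex, 0);
        return ERR_OK;
    }

    void sys_mutex_lock(sys_mutex_t *mutex)
    {
        KeWaitForSingleObject(&mutex->Mutex, Executive, KernelMode, FALSE, NULL);
    }

    void sys_mutex_unlock(sys_mutex_t *mutex)
    {
        KeReleaseMutex(&mutex->Mutex, FALSE);
    }

lwIP only asks for primitives like these; how well they behave under real concurrency is entirely the port's problem.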

Now, mTCP sounds good, but it is likely too resource-heavy for us and may rely on code we may not want to include. That said, it may be worth studying.
Last edited by PurpleGurl on Wed Aug 17, 2016 8:26 pm, edited 1 time in total.
raijinzrael
Posts: 10
Joined: Thu Jul 19, 2012 2:39 pm

Re: Blog: Google SoC lwIP Report Week 10

Post by raijinzrael »

saulius2 wrote:Some weeks ago, inspired by the experiments and blog entries of zhu48, I decided to search for alternatives to lwIP to see how they deal with the same task. I found several stacks, though mostly targeted at embedded systems and only partial or very minimalistic.

One of them caught my eye, however: it's called mTCP...
In this paper I can clearly see two things:

- It's a user-mode library :-/
- It's focused on *nix... again :-/

So it's a no-no... There's a reason why lwIP was chosen over *nix alternatives like the OSKit BSD TCP/IP stack (ROS used OSKit as its TCP/IP stack before it was ditched in favor of lwIP, and even MS ditched it in favor of their own). Simply speaking... stuff written with *nix architecture in mind doesn't play well with NT as it is, and maintaining wrapper patches for these things is a burden in the best cases and a nightmare in the worst cases... At least lwIP is somewhat platform agnostic.

So, until there is a better, more NT-friendly alternative, or someone wants to jump in and write a TCP/IP stack better tailored to the NT NDIS model, just like MS did, lwIP seems the sanest choice, even with its bundled problems.
zhu48
Posts: 18
Joined: Fri Jun 03, 2016 5:28 pm

Re: Blog: Google SoC lwIP Report Week 10

Post by zhu48 »

mTCP looks like a highly involved and highly optimized TCP implementation targeted towards POSIX systems. To integrate it transparently into ReactOS as the primary TCP driver seems to me like a larger task than doing our own implementation of their optimization techniques. Many of mTCP's optimizations involve CPU core locality and batched I/O and system calls. As such, mTCP would likely include code that replaces functionality implemented in AFD/TDI, not just the low level TCP protocol driver that interfaces with AFD/TDI. It may even involve changes to the kernel itself. The specific focus towards NUMA and use of POSIX threads, I imagine, are big problems (overkill for NUMA, compatibility for pThreads) for integrating into ReactOS. Even if we can't directly use the code, it's probably worth reading through as a reference though.
saulius2
Posts: 18
Joined: Sat Jan 17, 2009 4:01 pm

Re: Blog: Google SoC lwIP Report Week 10

Post by saulius2 »

PurpleGurl wrote:I didn't literally mean "everyone," just the ROS devs who worked with it. My point is that it would likely be easier to write the stack and drivers from scratch. I'm not a references or sources person.
PurpleGurl, I didn't either. My idea was to find responses from previous attempts; knowing how many there were would help to build a general picture. But maybe Google will answer me. So far I have found only posts from Cameron Gutman circa 2011.

raijinzrael and zhu48, thank you for the insights. Had I been a ReactOS veteran, maybe I would have known the answers already. Some more questions follow.
raijinzrael wrote:In this paper I can clearly see two things:

- It's a user-mode library :-/
- It's focused on *nix... again :-/

[...] Simply speaking... stuff written with *nix architecture in mind doesn't play well with NT [...] At least lwIP is somewhat platform agnostic.
OK, I readily accept the second issue.
But how relevant is the first one to ReactOS?
zhu48 wrote:mTCP looks like [...] implementation targeted towards POSIX systems. To integrate it transparently into ReactOS as the primary TCP driver seems to me like a larger task than doing our own implementation of their optimization techniques. Many of mTCP's optimizations involve CPU core locality and batched I/O and system calls. As such, mTCP would likely include code that replaces functionality implemented in AFD/TDI, not just the low level TCP protocol driver that interfaces with AFD/TDI. It may even involve changes to the kernel itself.
zhu48, thanks for connecting some of the details for the rest of us. The picture is getting clearer.
zhu48 wrote:The specific focus towards NUMA and use of POSIX threads, I imagine, are big problems (overkill for NUMA, compatibility for pThreads) for integrating into ReactOS.
Right, this is why I started by mentioning NUMA as overkill.
But I didn't think about pthreads. I thought mTCP implemented its own synchronization of multithreaded / parallel operations; now I guess they just use pthreads for that.
zhu48 wrote:Even if we can't directly use the code, it's probably worth reading through as a reference though.
This is exactly what I was trying to emphasize: maybe you or someone else could get some nice ideas for handling concurrency from it. E.g. the use of shared memory seemed handy in this respect. I am sorry if I failed to make that clear.

Wishing you the best of luck in bringing clarity to the subject! And have a great weekend :)