Eternal Lands Official Forums
Florian

malloc is evil


There are a few caveats to ponder first when considering using alloca:

  • Obviously, there can't be many places in EL where memory is allocated for use and freed within a single scope. In fact, the only examples I can think of are creating strings for filenames, which could quite simply use variable-length arrays - however, last time I checked (a *long* time ago), EL was only using C89-compliant features.
  • It's not standard ANSI C of any variety, though GCC, MSVC (via _alloca) and tcc (x86 only?) support it.
  • "Portable" alloca implementations tend to make non-standard assumptions about the underlying architecture, and thus don't offer a guaranteed userspace alternative when the compiler fails to implement it.
  • IIRC the GCC implementation doesn't cleanly return NULL when you allocate too much memory, but segfaults instead.
  • alloca uses the size-limited execution stack, which makes it terrible for use for parsing large temporary sets or in recursive functions.
  • If EL continues slowly transitioning towards C++, scope-local containers will be able to handle most, if not all, of the use cases of alloca cleanly anyway.
  • Most people don't know about alloca, and thus may either (a) assume that it's a general-purpose allocator in the EL code or (b) need to look it up, raising the barrier to entry to the codebase.

PS: What's the deal with the first link, anyway? I know it covers general tenets about optimization, but unless you're calling a function to generate path names in an inner loop somewhere, alloca will not supply a quantifiable performance boost.

 

EDIT: Oh, I see. You perhaps meant to point to http://www.fefe.de/dietlibc/diet.pdf ?

Edited by crusadingknight


Unless we have a lot of dynamic memory allocation, such as in a loop or a frequently called function (which hopefully we don't), I see no benefit in using that function.


Treating a performance problem by speeding up memory allocation is not the way to go. If memory allocations take too much time, then we simply have to make fewer of them.

For code like the eye candy system that needs a lot of objects, this can be done with object pools that keep the objects in memory and reuse them instead of destroying and creating them each time. I don't remember how it is done there, though; maybe it already works like that.


My point is not to increase memory allocation performance, but to remove/prevent memory leaks :devlish:

 

The 1.7.0 Windows client (at least on my machine) increases its memory usage continuously; after some days it has grown from ~150 MB after startup to ~550 MB. I guess that's because of some memory leak.


alloca won't really help there, since it takes little work to free a chunk of memory at the end of the scope it was allocated in - less work, in fact, than to change all scope-local memory allocations over to alloca and remove the corresponding calls to free. In my experience, most memory leaks come from manipulating and tearing down more complex (global) structures rather than from within a single scope. Besides, I don't think EL should be able to leak 400 MB of scope-local memory within a few days, unless you're frequently allocating obscenely large transient buffers.

 

Out of curiosity, have you tried compiling with -DMEMORY_DEBUG to see if you can pin down the memory leaks, or at least determine whether they are in sections of code using the C allocators?

Edited by crusadingknight


I've been running the same instance of the client constantly since the update (nearly a week now), monitoring the memory usage automatically each hour. This is on Linux. It would seem to me that the data+stack memory usage is pretty stable at around 140 MB.


How wrong could I be! I continued to run the client for a few more days of normal usage with no substantial change. I then embarked on a two-continent tour; all the outside maps at least (I was on horseback). The client memory usage increased with each new map - we should expect this, as new textures/objects etc. get loaded on map changes. For the most part revisiting a map does not increase memory usage, but I did find that jumping back and forth between two maps consistently increased memory usage a little. This is a concern as it suggests a memory leak. I will investigate further....


Maybe tcmalloc from the Google perftools might help.
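For anyone on Linux who wants to try it, the usual gperftools workflow is to link against tcmalloc and run with heap profiling enabled; the binary name and paths below are placeholders, not the real build commands:

```shell
# Link the client against tcmalloc (add to the final link line):
#   gcc -o el.bin $(OBJS) $(LIBS) -ltcmalloc

# Run with the heap profiler on; dump files land at the given prefix:
HEAPPROFILE=/tmp/el-heap ./el.bin

# Afterwards, inspect a dump to see where live memory was allocated:
pprof --text ./el.bin /tmp/el-heap.0001.heap
```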

I can't use it on MacOSX :(

... but I did find jumping back and forth between two maps consistently increased memory usage a little. This is a concern as it suggests a memory leak. I will investigate further....

That matches my usual behaviour, PL sto <-> PL.

Edited by Florian

