Eternal Lands Official Forums

1.5 vs 1.6 graphic engine & Mac


I think some people had ATI Macs and the vertex programs still didn't work..

All actors completely black, but well animated.

Maybe there's some 16-bit problem in the shaders; AFAIK Apple's OpenGL can only handle 32-bit correctly.

I think some people had ATI Macs and the vertex programs still didn't work..

Yes, I have an OpenGL 2.0 capable system, and vertex programs do not work properly. To be more accurate, Eternal Lands' vertex programs do not work; Apple's GLSLShowpiece sample code compiles and runs just fine. My point was that if you want Macs (which includes the latest MacBook) to continue to be able to run EL, concessions will have to be made.


Profiled both clients:

 

1.5: the most expensive OpenGL call takes 0.3% of total time, spread over cal_render_actor, draw_2d_object and draw_3d_objects (0.2%)

 

1.6: draw_3d_object triggers OpenGL calls (one of which ends with gldDestroyQuery :icon13: ) taking up to 4.5% of total time

 

That's a lot. Any ideas?


It's not only related to Macs. In most cases desktop computers can be upgraded with a better video card, but laptop users are usually stuck with their old video chips. In some cases even rather new laptops (bought 2 or 3 years ago) have crappy graphics cards, and I think it's a bad idea to lock those out.

 

If vertex shaders are going to be required in future builds, then with the current vertex program the actually required OpenGL version will not be 1.4 (which provides ARB_vertex_program) but 1.5 or higher, because the current vertex programs use more variables and instructions than OpenGL 1.4 guarantees to be available.
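As an illustration (a minimal sketch only, not EL code; the function name and fallback logic are mine), a client could check at load time whether a compiled ARB vertex program actually stays within the card's native limits and fall back to the fixed-function path otherwise:

#include <string.h>
#include <GL/gl.h>
#include <GL/glext.h>	/* ARB_vertex_program tokens; entry points must already be resolved */

/* Returns 1 if the program source in 'src' compiled and fits the native
   hardware limits, 0 if the client should fall back to fixed function. */
int vertex_program_usable(GLuint prog_id, const char *src)
{
	GLint error_pos, under_native_limits;

	glBindProgramARB(GL_VERTEX_PROGRAM_ARB, prog_id);
	glProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
		(GLsizei)strlen(src), src);

	/* syntax/compile error? */
	glGetIntegerv(GL_PROGRAM_ERROR_POSITION_ARB, &error_pos);
	if (error_pos != -1)
		return 0;

	/* compiled, but does it exceed what the hardware can run natively? */
	glGetProgramivARB(GL_VERTEX_PROGRAM_ARB,
		GL_PROGRAM_UNDER_NATIVE_LIMITS_ARB, &under_native_limits);
	return under_native_limits ? 1 : 0;
}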

 

I'm in the lucky position of being able to buy a new laptop once my current one gets old, but there are many people out there who can't buy a new laptop every 3 or 4 years, so maintaining a poor man's setting is important for them.

1.6: draw_3d_object triggers OpenGL calls (one of which ends with gldDestroyQuery :icon13: ) taking up to 4.5% of total time

 

That's a lot. Any ideas?

 

This one is a bit old but might still apply.

 

http://lists.apple.com/archives/mac-opengl...b/msg00115.html

 

In the discussion several issues are raised. One is excessive memcpy'ing of textures, and the second is the pixel format used for textures, which prevents DMA upload: http://lists.apple.com/archives/mac-opengl...b/msg00121.html
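In short, the fast path they describe boils down to something like the sketch below (only a sketch: the function, buffer and size names are mine, and the APPLE extensions of course have to be checked for before use). GL_BGRA with GL_UNSIGNED_INT_8_8_8_8_REV avoids the driver-side swizzle, and the client-storage hint lets the driver DMA straight from the application's copy instead of memcpy'ing it:

#include <OpenGL/gl.h>
#include <OpenGL/glext.h>	/* GL_APPLE_client_storage / GL_APPLE_texture_range tokens */

/* 'bgra_pixels' must stay valid for the lifetime of the texture, because with
   client storage enabled the driver may keep referencing our copy directly. */
void upload_texture_fast_path(GLuint tex_id, int width, int height,
	const GLubyte *bgra_pixels)
{
	glBindTexture(GL_TEXTURE_2D, tex_id);

	/* let the driver use our buffer instead of making its own copy (no memcpy) */
	glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, GL_TRUE);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_STORAGE_HINT_APPLE,
		GL_STORAGE_CACHED_APPLE);

	/* native pixel format: no swizzle, DMA upload possible */
	glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
		GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, bgra_pixels);

	glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, GL_FALSE);
}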

This one is a bit old but might still apply.

 

http://lists.apple.com/archives/mac-opengl...b/msg00115.html

 

In the discussion several issues are raised. One is excessive memcpy'ing of textures, and the second is the pixel format used for textures, which prevents DMA upload: http://lists.apple.com/archives/mac-opengl...b/msg00121.html

 

Thanks! Seems really pertinent... is there a chance to implement those suggestions (I really don't understand them) and try them out before the update? :icon13:

1.6: draw_3d_object triggers OpenGL calls (one of which ends with gldDestroyQuery :icon13: ) taking up to 4.5% of total time

 

That's a lot. Any ideas?

 

This one is a bit old but might still apply.

 

http://lists.apple.com/archives/mac-opengl...b/msg00115.html

 

In the discussion several issues are raised. One is excessive memcpy'ing of textures, and the second is the pixel format used for textures, which prevents DMA upload: http://lists.apple.com/archives/mac-opengl...b/msg00121.html

Arf... I found exactly the same post and was about to post it as well. :D

BTW, they also mention the Apple caching system that stores everything, and this is maybe why your hard disk is spinning, Fedora...

BTW, they also mention the Apple caching system that stores everything, and this is maybe why your hard disk is spinning, Fedora...

 

So with more RAM it should go away?

 

and:

 

- the memory monitor shows a lot of inactive (previously claimed and now free) memory, so why the paging?

- are we using (many) more textures, or using them in a different way, in 1.6 (with my poor man settings)?

So with more RAM it should go away?

Wouldn't it be better to disable the caching for EL if it's possible?

 

 

Mmm... I don't think I can disable it for a single process (can I?). And BTW, isn't the paging due to the fact that not all the textures can fit in my video memory (so they are put back and forth to system memory every frame)? Even more RAM wouldn't do the job, I guess... I'm getting confused... :icon13:

Edited by Fedora

Mmm... I don't think I can disable it for a single process (can I?). And BTW, isn't the paging due to the fact that not all the textures can fit in my video memory (so they are put back and forth to system memory every frame)? Even more RAM wouldn't do the job, I guess... I'm getting confused... :icon13:

I was not speaking of paging. Actually, even with all the new animations, the client should not take more than 200 MB.

 

I'm not a Mac expert, so maybe I haven't understood what they said, but I know that Mac OS X has a system that indexes all the documents you work on so it can run queries on them very fast. And it seems that this indexing also picks up the textures and other files used by the client...

GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV

These format and data type combinations also provide acceptable performance

...

The combination GL_RGBA and GL_UNSIGNED_BYTE needs to be swizzled by many cards when the data is loaded, so it's not recommended.

 

This could be achieved by creating a separate texture buffer besides texture_struct::texture as

 

	/* one packed 32-bit pixel (4 bytes, B,G,R,A on a little-endian Intel Mac)
	   per texel, to match GL_UNSIGNED_INT_8_8_8_8_REV below */
	tex->texture_bgra = (GLubyte*) malloc(texture_width * texture_height * 4);

 

and converting the texture data upon loading, or right after it changes (like the night color adjustment):

 

		// RGBA -> BGRA conversion
	for (int i = 0; i < texture_width * texture_height; i++)
	{
		  tex->texture_bgra[i * 4 + 0] = tex->texture[i * 4 + 2];  // B
		  tex->texture_bgra[i * 4 + 1] = tex->texture[i * 4 + 1];  // G
		  tex->texture_bgra[i * 4 + 2] = tex->texture[i * 4 + 0];  // R
		  tex->texture_bgra[i * 4 + 3] = tex->texture[i * 4 + 3];  // A
	}

 

and then using GL_BGRA / GL_UNSIGNED_INT_8_8_8_8_REV in all calls to glTexImage2D:

 

		/* the internal format stays GL_RGBA (GL_BGRA is only valid as the external
		   format); the pixel data is passed in the card's native BGRA layout */
		if (tex->texture_bgra)
			glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, x_size, y_size, 0,
				GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, tex->texture_bgra);
		else
			glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, x_size, y_size, 0,
				GL_RGBA, GL_UNSIGNED_BYTE, tex->texture);

 

 

Although this has the downside of additional memory usage, since both the RGBA and the BGRA copy of each texture are kept in memory.

 

EDIT: Just a thought. Maybe the Mac selection problems are caused by the different color space.

Edited by ago

I think some people had ATI Macs and the vertex programs still didn't work..

Yes, I have an OpenGL 2.0 capable system, and vertex programs do not work properly. To be more accurate, Eternal Lands' vertex programs do not work; Apple's GLSLShowpiece sample code compiles and runs just fine. My point was that if you want Macs (which includes the latest MacBook) to continue to be able to run EL, concessions will have to be made.

 

Well, there will be some concessions, as in the game will still work without vertex programs, but the quality will suck.

As for the GLSL samples, we are not using GLSL for the actors, we are using vertex programs.
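To illustrate the difference (a rough sketch only, not the client's actual checks; the helper name is made up): the two paths go through completely separate driver code, so a working GLSL demo says nothing about the ARB_vertex_program path used for the actors.

#include <stdio.h>
#include <string.h>
#include <OpenGL/gl.h>

/* Rough sketch: report which of the two shader paths the driver advertises.
   Even when both are listed, GLSLShowpiece only exercises the GLSL compiler;
   the assembly ARB_vertex_program path can still be broken. */
void report_shader_paths(void)
{
	const char *ext = (const char *)glGetString(GL_EXTENSIONS);

	/* assembly-style programs fed through glProgramStringARB (used for actors) */
	int have_vp = ext && strstr(ext, "GL_ARB_vertex_program") != NULL;

	/* high-level GLSL, compiled by the driver and run via glUseProgramObjectARB */
	int have_glsl = ext && strstr(ext, "GL_ARB_shading_language_100") != NULL;

	printf("ARB vertex programs: %s, GLSL: %s\n",
		have_vp ? "yes" : "no", have_glsl ? "yes" : "no");
}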


let's sum up.

 

My tests show that:

- a little more time is spent drawing 3D objects. I say little because I see 2-3%, while in the articles above it is in the order of 20-25%.

- I have 12 MB of free VRAM even with 2 clients running

- the OpenGL monitor reports no difference in swapping between 1.5 and 1.6

- textures are managed the same way in both clients, and 1.5 is fine

 

...so, is it possible that this isn't a graphics-related issue? Is the 1.6 client accessing the disk somewhere? Is there a way to check out the code as it was at the beginning of January?

 

Thanks.

...so, is it possible that this isn't a graphics-related issue? Is the 1.6 client accessing the disk somewhere? Is there a way to check out the code as it was at the beginning of January?

I'm logging a lot of debugging stuff with the missiles, but if you also have problems without using them, it's not related.

To check an older version, just use the -D option with CVS. For example: cvs up -d -D "2008-01-01" to update to the version from January 1st.


Found!! :P

 

Well... almost... if I comment out the call to display_blended_objects() (gamewin.c, line 1134) the spinning goes away and the client works again!

 

Now... why? And most importantly... can I keep it commented out, or am I losing something more than a bit of eye candy (not meaning the eye_candy fx)?

 

EDIT:

Oh my, oh my... display_blended_objects doesn't draw a single object (a few printfs added, tested at the VotD storage)... but I get a stable 30 fps without it, using vertex buffer objects works again... even fog and reflections... totally puzzled.

Edited by Fedora


I watched use_animation_program in gdb today.

 

First it gets set to 0 by

Old value = 1

New value = 0

check_option_var (name=0x1d65a8 "use_animation_program") at elconfig.c:1247

1247 our_vars.var->func (our_vars.var->var);

(gdb) bt

#0 check_option_var (name=0x1d65a8 "use_animation_program") at elconfig.c:1247

#1 0x00049afe in check_options () at elconfig.c:1275

#2 0x000677e7 in init_video () at gl_init.c:505

#3 0x00072fba in init_stuff () at init.c:640

 

Then this:

Old value = 0

New value = 1

0x000479fe in change_use_animation_program (var=0x23f03c) at elconfig.c:277

277 *var = 1;

(gdb) bt

#0 0x000479fe in change_use_animation_program (var=0x23f03c) at elconfig.c:277

#1 0x000499c0 in check_option_var (name=0x1d65a8 "use_animation_program") at elconfig.c:1247

#2 0x00049afe in check_options () at elconfig.c:1275

#3 0x000677e7 in init_video () at gl_init.c:505

#4 0x00072fba in init_stuff () at init.c:640

 

And again:

Old value = 1

New value = 0

check_option_var (name=0x1d65ac "use_animation_program") at elconfig.c:1247

1247 our_vars.var->func (our_vars.var->var);

(gdb) bt

#0 check_option_var (name=0x1d65ac "use_animation_program") at elconfig.c:1247

#1 0x00049afe in check_options () at elconfig.c:1275

#2 0x0007311b in init_stuff () at init.c:685

 

When I comment out the second call to check_options() in init.c I get UVP: 1 as expected, but no actor is drawn (no players, no monsters).

Banners are there!

RC1 Windows data files.

 

/EDIT infos.log:

Init extensions.
GL_MAX_TEXTURE_UNITS_ARB: 8
GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT: 16.000000
Generating 3D noise: octave 1/4...
Generating 3D noise: octave 2/4...
Generating 3D noise: octave 3/4...
Generating 3D noise: octave 4/4...
Compressing noise
Building noise texture
Done with noise
Filter lookup texture
Compiling shader './shaders/water_fs.glsl' successful
Linking shaders successful
Compiling shader './shaders/water_fs.glsl' successful
Linking shaders successful
Compiling shader './shaders/reflectiv_water_fs.glsl' successful
Linking shaders successful
Compiling shader './shaders/reflectiv_water_fs.glsl' successful
Linking shaders successful
Compiling shader './shaders/water_fs.glsl' successful
Linking shaders successful
Compiling shader './shaders/water_fs.glsl' successful
Linking shaders successful
Compiling shader './shaders/reflectiv_water_fs.glsl' successful
Linking shaders successful
Compiling shader './shaders/reflectiv_water_fs.glsl' successful
Linking shaders successful
Init extensions done
Init eyecandy
Init eyecandy done
Init actor defs
Build vertex buffers for 'human female'
Build vertex buffers for 'human female' done
Build vertex buffers for 'human male'
Build vertex buffers for 'human male' done
Build vertex buffers for 'elf female'
Build vertex buffers for 'elf female' done
Build vertex buffers for 'elf male'
Build vertex buffers for 'elf male' done
Build vertex buffers for 'dwarf female'
Build vertex buffers for 'dwarf female' done
... (all successful)
Build vertex buffers for 'chinstrap_penguin'
Build vertex buffers for 'chinstrap_penguin' done
Build vertex buffers for 'gentoo_penguin'
Build vertex buffers for 'gentoo_penguin' done
Build vertex buffers for 'king_penguin'
Build vertex buffers for 'king_penguin' done
Init actor defs done
Init lights
Init done
Init done!

 

2nd /EDIT

None of the options in the debug tab seem to do anything, except the top one, which also disables the banners.

 

3rd /EDIT:

error_log:

[21:55:16] EXTENDED EXCEPTION(5:opengl_error): invalid operation in render_mesh_shader at actor_init.cpp (line 346)

Last message repeated 2921 times

 

4th /EDIT

From the glDrawElements manpage

GL_INVALID_OPERATION is generated if glDrawElements is executed between the execution of glBegin and the corresponding glEnd.
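To narrow down which call actually raises the error (just a debugging sketch, not part of the client), one can bracket the suspect spots with a glGetError helper, called outside any glBegin/glEnd pair:

#include <stdio.h>
#include <OpenGL/gl.h>

/* Drain and report all pending GL errors, tagged with a location string.
   Sprinkled before and after suspect calls (e.g. around render_mesh_shader)
   it shows which call raises GL_INVALID_OPERATION. */
void check_gl_error(const char *where)
{
	GLenum err;

	while ((err = glGetError()) != GL_NO_ERROR)
		fprintf(stderr, "GL error 0x%04x at %s\n", err, where);
}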

Edited by Florian

Found!! :hiya:

 

Well... almost... if I comment out the call to display_blended_objects() (gamewin.c, line 1134) the spinning goes away and the client works again!

 

Now... why? And most importantly... can I keep it commented out, or am I losing something more than a bit of eye candy (not meaning the eye_candy fx)?

 

EDIT:

Oh my, oh my... display_blended_objects doesn't draw a single object (a few printfs added, tested at the VotD storage)... but I get a stable 30 fps without it, using vertex buffer objects works again... even fog and reflections... totally puzzled.

 

Maybe because blended objects are only used in some maps?


OK, sorry... now I've found what is slowing down my client, and it is the new selection. With it enabled I can't click anything, but fps is a steady 30; without it I get the spinning. I'm going to look into it soon (and it isn't the mouse sensitivity, already checked).

 

And for those wondering why I thought it was display_blended_objects... that's because I tested the new client for the first time on the main server, where new_selection is on by default :P

 

EDIT:

The guilty line is "glReadPixels(mouse_x, window_height - mouse_y, 1, 1, GL_RGB, GL_BYTE, &pixels);" (select.cpp, line 190).

Any idea why?

 

EDIT2:

It seems that GL_RGBA gives better performance. I also tried GL_UNSIGNED_BYTE, but selection gets messed up. BTW, any idea why pixels is a char[16] and why GL_RGB is used? And where is the back buffer set up? Maybe new_selection needs an ad hoc setup for it on my machine... (most NPCs are clickable and I can interact with myself)

Edited by Fedora


The back buffer is created during OpenGL initialisation and is used for normal drawing too. The client renders to the back buffer, which is then swapped with the front buffer to provide flicker-free double-buffered rendering.

 

As stated before, the buffer seems to have a native color order of BGRA. You could try to set this for testing. You won't be able to interact, but at least you can see whether the performance gets better. Or you do the color conversion yourself and swap the pixel values right after the glReadPixels call.
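Something along these lines, as a rough sketch (the helper name and the rgb[] output are made up; mouse_x, mouse_y and window_height are the ones from the quoted glReadPixels line):

#include <OpenGL/gl.h>

/* Read one pixel back in the buffer's native BGRA layout and unpack it to the
   RGB order the selection code expects. */
static void read_selection_pixel(int mouse_x, int mouse_y, int window_height,
	GLubyte rgb[3])
{
	GLuint p;

	/* single pixel, no driver-side format conversion */
	glReadPixels(mouse_x, window_height - mouse_y, 1, 1,
		GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, &p);

	/* with GL_UNSIGNED_INT_8_8_8_8_REV, A sits in bits 31-24, R in 23-16,
	   G in 15-8 and B in 7-0 of the returned 32-bit value */
	rgb[0] = (GLubyte)((p >> 16) & 0xff);	/* R */
	rgb[1] = (GLubyte)((p >>  8) & 0xff);	/* G */
	rgb[2] = (GLubyte)( p        & 0xff);	/* B */
}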


I looked into the new selection code (headache...), here is what happens on my machine:

 

- 3D objects are selectable, no problems with them.

- ONLY one actor is selectable at a given time, and more precisely the first actor drawn into the back buffer, which is usually my char but can sometimes be an NPC (hence their random clickability). To prove this, I reversed the loop that draws actors (select.cpp, ~330) and indeed only the first drawn actor is selectable.

- for unselectable actors glReadPixels returns a buffer of greys (r=g=b and a=255).

 

Any idea what's going wrong? Maybe some flag is unset during actor drawing... (and maybe we have found the new_selection bug for some machines)


Solved!!! :bow_arrow:

Here is what I did (select.cpp, ~376):

 

Index: select.cpp
===================================================================
RCS file: /cvsroot/elc/elc/select.cpp,v
retrieving revision 1.18
diff -a -u -r1.18 select.cpp
--- select.cpp	11 Mar 2008 21:58:04 -0000	1.18
+++ select.cpp	22 Mar 2008 18:40:23 -0000
@@ -350,8 +350,12 @@
								  glDisable(GL_TEXTURE_2D);
							}
							glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, colorf);
+							glPushAttrib(GL_ALL_ATTRIB_BITS);
+							glPushClientAttrib(GL_CLIENT_ALL_ATTRIB_BITS);
							draw_actor_without_banner(actors_list[selections[i].id],
-								0, actors_list[selections[i].id]->has_alpha, 0);
+													  0, actors_list[selections[i].id]->has_alpha, 0);
+							glPopClientAttrib();
+							glPopAttrib();
						}
						break;
					case UNDER_MOUSE_3D_OBJ:

 

I save and restore the OpenGL state around drawing every actor (if the first one works...) and now I have new_selection working!

 

 

EDIT

OT: shouldn't actors_list be locked before accessing it?
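Something like this is what I mean (only a sketch; the mutex and list names here are illustrative, not necessarily the client's actual ones):

#include <SDL.h>

struct actor;				/* opaque here; the real type lives in actors.h */

/* illustrative names */
extern SDL_mutex *actors_list_mutex;
extern struct actor *actors_list[];
extern int max_actors;

/* walk the list with the mutex held, so no other thread can add or remove
   actors underneath us */
void for_each_actor(void (*fn)(struct actor *))
{
	int i;

	SDL_LockMutex(actors_list_mutex);
	for (i = 0; i < max_actors; i++)
		if (actors_list[i])
			fn(actors_list[i]);
	SDL_UnlockMutex(actors_list_mutex);
}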

Edited by Fedora


I just committed a single-line patch.

Enable SDL V-SYNC on OSX.

That completely removes tearing and jerky movements for me, with and w/o NEW_CAMERA.

 

#ifdef OSX
// enable V-SYNC
SDL_GL_SetAttribute( SDL_GL_SWAP_CONTROL, 1 );
#endif

