Wrote a super simple database ORM/abstraction layer for my BBS.
The idea is that I create a class that maps to each table, run an SQL query against the database, pass the resulting IDataReader to this DbClassBase.Retrieve() method which will return a list of proper classes for me.
I know there's existing ORM packages out there, but I didn't really need the full power or complexity that they provide.
[img]http://ahb.me/OL1[/img]
[QUOTE=Dj-J3;25774224]Join the project, however you wish to say it. :v:[/QUOTE]
I would very much like to help out.
In Win32, why does a system menu not respond? It shows up, but it won't change when I click on a menu item.
[QUOTE=TVC;25775665]There is already a 2DCraft at [url]http://2dcraft.net/[/url] currently it is written in GM but I think he wants to move it over to Java or XNA.[/QUOTE]
Well fuck, I guess that means he can't do his :downs:
[QUOTE=Siemens;25778430]Wrote a super simple database ORM/abstraction layer for my BBS.
The idea is that I create a class that maps to each table, run an SQL query against the database, pass the resulting IDataReader to this DbClassBase.Retrieve() method which will return a list of proper classes for me.
I know there's existing ORM packages out there, but I didn't really need the full power or complexity that they provide.
[img_thumb][URL]http://ahb.me/OL1[/URL][/img_thumb][/QUOTE]
I hope you delete that exception.
[QUOTE=layla;25768734]Second sampler ready for lightmapping :D
[img_thumb]http://dl.dropbox.com/u/99765/23476231.png[/img_thumb][/QUOTE]
I spy t4
what the fuck mono you prick, it was working 10 fucking minutes ago:
[img]http://ahb.me/OSF[/img]
No, really. It was looking like this 10 minutes ago:
[img]http://ahb.me/OOg[/img]
[QUOTE=Jookia;25781468]I hope you delete that exception.[/QUOTE]
Why? That exception should never be thrown, and if it is, it indicates something's wrong with my code that needs to be fixed.
[editline]1st November 2010[/editline]
automerge :ninja:
[QUOTE=TVC;25775665]There is already a 2DCraft at [url]http://2dcraft.net/[/url] currently it is written in GM but I think he wants to move it over to Java or XNA.[/QUOTE]
1. The name isn't really 100% decided yet, it'll probably change before alpha.
2. This isn't a clone of Minecraft; it's more of a demake, with some new features added. Let's say the inspiration came from Minecraft.
[editline]1st November 2010[/editline]
Anyone who wants to help, add me on Steam, or else I'll probably miss it if you write in here. :v:
(Or pm)
[editline]1st November 2010[/editline]
Maybe a brainstorm hour would be fitting some day soon. Great ideas could come out of it. :buddy:
Also, if someone isn't sure, it's written in C# with SFML.net for rendering and audio.
Sooo.... Finally got everything working. I remembered that OpenTK already has some functions for generating view frustum and rotation matrices so I used that.
I feel less of a badass since I didn't write them myself, but it works, so it evens it out :P
[img]http://lh6.ggpht.com/_ZoH0ul2PquE/TM6VjY1yErI/AAAAAAAAAqk/5fekMBV0Yyk/fe.png[/img]
It renders 150k faces (although the screenshot only shows points) in 30ms, is that too slow? I'm on 9800GT, my friend on HD3870 (which should be less good according to benchmarks) does it in 15ms.
It does sound rather slow. Are you using vertex / index buffers? Also, are they all getting rendered in one batch / draw call?
Well I don't use OpenGL so I had to look it up, but it seems that you are using buffers and a single draw call so I can't see why you wouldn't get better performance.
Could it be a driver-related issue? Maybe I'm just judging it wrong, but it does seem to be taking longer than it should.
[b]Edit:[/b] Keep in mind that when you enable back-face culling, roughly half of the polygons won't need to be rendered, so performance should approximately double. Also you may find that triangles render more quickly than points.
[QUOTE=Darwin226;25782426]Sooo.... Finally got everything working. I remembered that OpenTK already has some functions for generating view frustum and rotation matrices so I used that.
I feel less of a badass since I didn't write them myself, but it works, so it evens it out :P
[img_thumb]http://lh6.ggpht.com/_ZoH0ul2PquE/TM6VjY1yErI/AAAAAAAAAqk/5fekMBV0Yyk/fe.png[/img_thumb]
It renders 150k faces (although the screenshot only shows points) in 30ms, is that too slow? I'm on 9800GT, my friend on HD3870 (which should be less good according to benchmarks) does it in 15ms.[/QUOTE]
These stats mean [i]absolutely nothing[/i] at this point. You should chill out and stop worrying about numbers. Good work though.
Thanks, I know that no game will have models with this high a poly count or anything close to it. It's not an issue.
But if I'm doing something wrong at this point, I'd like to fix it as soon as I can.
Could you send me the model so I can try it in my render framework? I feel like doing some benchmarking of my own.
[url]http://www.oyonale.com/modeles.php?lang=en&page=43[/url]
there you go
Well Darwin, looking up uniform locations multiple times every time you draw is a little silly, considering they don't change provided you don't change your shader.
Not sure whether you're looking up a hash map or getting the location from gl, but it's a bit silly to duplicate that code at any rate.
Also your VertexAttribPointer calls only need to happen once too, they don't require re-setting every draw. Actually you don't need to bind the buffer again at all.
something like this once...
[code]
/* VBO */
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), &verts[0], GL_STATIC_DRAW);
glVertexAttribPointer(vertex_loc, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const void *)offsetof(Vertex, position));
glVertexAttribPointer(texcoord_loc, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const void *)offsetof(Vertex, texcoord));
glVertexAttribPointer(colour_loc, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex), (const void *)offsetof(Vertex, colour)); /* GL_TRUE normalises the bytes to 0..1 */
glBindBuffer(GL_ARRAY_BUFFER, 0);
[/code]
and then draw like...
[code]
glEnableVertexAttribArray(vertex_loc);
glEnableVertexAttribArray(texcoord_loc);
glEnableVertexAttribArray(colour_loc);
glDrawArrays(GL_QUADS, 0, 4);
glDisableVertexAttribArray(vertex_loc);
glDisableVertexAttribArray(texcoord_loc);
glDisableVertexAttribArray(colour_loc);
[/code]
[editline]1st November 2010[/editline]
Note too I'm using some deprecated stuff like GL_QUADS because I'm lazy, but you should get the point.
[editline]1st November 2010[/editline]
Oh and note my use of offsetof(Vertex, position), which is probably a good idea since at the moment you're ignoring any structure padding the compiler might do — provided, of course, that your data is in a vertex structure rather than manually interleaved.
Well, every mesh has its own vertex and element buffers, and since I don't know if the mesh changed since the last draw call (which it always does unless there's only one mesh to draw), I need to make sure the correct buffer is bound.
And program.Locations is a Dictionary<string, int>
[QUOTE=Darwin226;25782861]Well, every mesh has its own vertex and element buffers, and since I don't know if the mesh changed since the last draw call (which it always does unless there's only one mesh to draw), I need to make sure the correct buffer is bound.
And program.Locations is a Dictionary<string, int>[/QUOTE]
You don't understand, the currently bound buffer has no relevance to the draw call.
[QUOTE=blankthemuffin;25782863]Oh and note my use of offsetof(Vertex, position), which is probably a good idea since at the moment you're ignoring any structure padding the compiler might do, provided of course that your data is in a vertex structure that is, rather than manually interleaved.[/QUOTE]
It's manually interleaved. I only use the Vertex structure when setting up the mesh, then I convert the vertices into a float array when generating buffers.
[QUOTE=blankthemuffin;25782863]
You don't understand, the currently bound buffer has no relevance to the draw call.[/QUOTE]
Right, I don't understand.
How does GL know what mesh to render then?
Btw, I don't think I've seen you here for quite some time. If it's not just me, welcome back.
Sorry, I forgot to tell you about a bit that's relevant to your usage:
Vertex Array Objects
They basically store all your glVertexAttribPointer etc calls in a pretty object that you can bind before drawing.
Pretty sure it's a 3.x addition but the extension may be available for 2.x
What defines the OGL version I'm using?
There's definitely something wrong on your end. (I'm on a 4890 HD)
[img]http://www.imgdumper.nl/uploads3/4ccea7d3ae2ca/4ccea7d3a235b-lolfps.png[/img]
[QUOTE=Darwin226;25783037]What defines the OGL version I'm using?[/QUOTE]
Your graphics card, your graphics drivers, your context creation library, and what you ask for. Radeon HD-series cards and newer (someone confirm) and GeForce 8xxx-series cards and newer support 3.3 with appropriate drivers; below that you'll get 2.x with whatever extensions.
[editline]1st November 2010[/editline]
[url]http://www.opengl.org/registry/specs/ARB/vertex_array_object.txt[/url] should be supported pretty well I think.
I have no problem with using 3.x and unless I'm mistaken OpenTK supports it.
My 9800GT also does. I'll look into VAO.
Overv what do you think the problem is?
[editline]1st November 2010[/editline]
Ok this is confusing as hell...
OpenTK has some extra options when making a GameWindow one of which is Graphic context flags.
It was set to debug, I changed it to forward compatible and now it renders in < 1 ms.
[editline]1st November 2010[/editline]
Ok, kinda spamming now, but I'm currently trying to implement frame-rate independence, or whatever you call it.
I've got this:
[cpp]Stopwatch timer = new Stopwatch();
double overtime = 0;
while ( true ) {
    timer.Start();
    ProcessEvents( true );
    HandleConsole();
    Render();
    timer.Stop();
    overtime += timer.ElapsedTicks / (double)Stopwatch.Frequency * 1000;
    timer.Reset();
    while ( overtime > 16 ) {
        overtime -= 16;
        Update();
    }
    if ( Kill ) {
        Close();
        return;
    }
}[/cpp]
Does that seem right? (I'm mainly asking Jallen)
Also, my graphics card doesn't seem to like that "render as fast as you can" thing. It starts heating up and the fan spins like crazy when I run it. Would it be a good idea to cap it to something like 250 FPS?
[QUOTE=Darwin226;25783103]Does that seem right? (I'm mainly asking Jallen)[/QUOTE]
I think you should be including the time spent updating when measuring how much time has passed between updates, as currently you only measure render time. For example if render time took 50ms and updating took 30ms, the next time it comes around to update you are recording that 50ms has passed, when 80ms has passed between each update.
Also, if you are not interpolating positions when rendering, there won't be much point in rendering more often than you update, because you will render the same frame each time.
Another thing you might want to consider is processing updates up to a maximum step: in this example, if your elapsed time is greater than 16ms, divide it into updates of max size 16ms, then process the remainder as a smaller update step. This will allow you to process updates more frequently when the game is running faster than 16ms/frame, while avoiding the problems with large timesteps.
I'm working on turning a class into an executable jar file. I can't seem to figure it out.
I'd work a little on 2Dcraft, although I'm also very busy.
I'm also mainly a C++ guy, but I'm learning C#