@paindoc do you have any experience with writing to shader buffer storage from within shaders?
I've created a small test case based on what we talked about before.
I've created 2 identical buffers holding the indirect draw call parameters for a list of models to render.
I then use the first one as an indirect buffer, and the second one as an SSBO.
Using a test shader I try to just "zero-out" all of the data in the ssbo from the fragment shader, doing no reads on it whatsoever, as a means of testing writing to an ssbo from a shader.
I then swap that second buffer from an SSBO to an indirect buffer, and render the models again.
The result: it's as if none of my writes to the SSBO did anything, because all of the models render.
Expected result: the fragment shader should have zeroed everything out, so no triangles should render.
code snippet:
// bind framebuffer, bind VAO, bind accessory states, etc
cullingShader->bind();
glDisable(GL_DEPTH_TEST); // fbo is empty, just want to test successfully rendering all objects in VAO
glDepthMask(GL_FALSE); // disable actually writing anything to color/depth buffers
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
m_indirectGeo.bindBuffer(GL_DRAW_INDIRECT_BUFFER); // draw parameters
m_indirectGeo2.bindBufferBase(GL_SHADER_STORAGE_BUFFER, 6); // copy of draw parameters, to be modified in shader
glMultiDrawArraysIndirect(GL_TRIANGLES, 0, size, 0);
// Draw models normally
geometryShader->bind();
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 6, 0);
m_indirectGeo2.bindBuffer(GL_DRAW_INDIRECT_BUFFER); // The modified ssbo
glMultiDrawArraysIndirect(GL_TRIANGLES, 0, size, 0);
My test shader doesn't do any culling yet, but just tests writing to the ssbo.
Its vertex shader is normal. The fragment shader:
#version 460
layout (location = 0) out vec4 FirstTexture;
struct Draw_Struct {
    uint count;
    uint instanceCount;
    uint first;
    uint baseInstance;
};
layout (std430, binding = 6) buffer Draw_Buffer {
    Draw_Struct buffers[];
};
layout (location = 0) flat in uint id;
void main()
{
    buffers[id].count = uint(0);
    buffers[id].instanceCount = uint(0);
    buffers[id].first = uint(0);
    buffers[id].baseInstance = uint(0);
    FirstTexture = vec4(0);
}
I mean, if you're fully disabling the depth tests before your shader and you have a sparse enough test shader, it might just optimize everything out? Not quite sure why this isn't working. OpenGL upsets me a ton still >.>
I think this is going to be a case of you having to use memory barriers, though. Did you look at the occlusion culling sample code from Nvidia? It seems to be exactly what you're doing, and they use memory barriers a fair amount. I know I'd have to use them in Vulkan; otherwise the driver/implementation can reorder or parallelize the actual execution of my drawcalls pretty freely. The Nvidia example is pretty complex, but it definitely shows how to make sure writes to shader storage buffers are visible to various stages of the pipeline, and how to guarantee some explicit ordering of your drawcalls.
Oh okay. I guess that makes sense; the issue appeared to be all of those things at once.
Switching it around so as to allow color writes (but with the frag shader outputting 0's) caused visual flickering, so some of the elements were getting written to.
Also, I tried a memory barrier initially, but it didn't work. I tried GL_SHADER_STORAGE_BARRIER_BIT, but what I needed was GL_COMMAND_BARRIER_BIT.
Now most models aren't rendering, which is good, but there are still a few. Might be a logic issue with how I've set it up.
Also, for whatever reason it only does all this when writing to the buffer from the vertex shader; the frag shader does nil.
Sorry for the mini blog-posts; explaining things like this helps me visualize the problem a bit more too.
Okay so I'm vaguely drunk from a friday work event so this probably isn't very coherent but: that should be safe assuming you have memory barriers placed properly? Idk, this is all SO different in Vulkan, cus there I can place a memory barrier explicitly for different resources (and even regions of those resources) between stages of the pipeline, including between the vertex and fragment shaders lol.
I could use more details on exactly what drawcalls you're making here: I'm not really understanding how or why you're using the draw indirect buffer as a storage buffer at the same time, in the same "pass"? (though idk how the concept of passes works in OpenGL)
Much further into this and I won't be of any real aid, it's just too far removed from how things work in Vulkan. Your best bet (imo) is going to be really looking at the GPU Gems article and repository I linked on the last page, cus that's actually in opengl
I've been getting a feel for C++ now. Looking back, some of the functions are pretty simple. The only thing stumping me now is getting a program to read a text file while not displaying the numbers, but still be able to find the lowest element, the highest element, and the sum.
Code so far:
ifstream inFile; //modified
string filename;
void ReadNumbers();
// Get the filename from the user.
cout << "file name here to be read: ";
cin >> filename;
// Open the file.
inFile.open("file path.txt"); // new
// If the file successfully opened, process it.
if (inFile) // new, true if input file exists
{
    // Read the numbers from the file and
    // display them.
    while (inFile >> ReadNumbers)
    {
        cout << ReadNumbers << endl;
    }
    // Close the file.
    inFile.close();
}
else
{
    // Display an error message.
    cout << "Error opening the file.\n";
}
Maybe it's just me but newpunch seems to have messed up the formatting in your code. The first line I see is ifstream inFile; //modifiedstring filename; and I assume that should be 2 lines.
Also I'm not sure what you're doing with inFile >> ReadNumbers and cout << ReadNumbers since you declare ReadNumbers as a function. Did you mean to call it?
Yeah that's what I meant.
I'm not really sure what's going on here tbh. Your formatting makes it really hard to read what's going on, but you can't use a stream operator on a function like that.
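If it helps, the shape of the loop you're after looks like this; sketched in Python since the loop logic is the sticking point, not the C++ syntax (opening and reading the file is the same idea as your `inFile >>` loop, just with a plain variable instead of a function name):

```python
def summarize(text):
    """Track the lowest value, highest value, and running sum
    of whitespace-separated numbers in the given text."""
    lowest = highest = None
    total = 0
    for token in text.split():
        n = int(token)
        total += n
        lowest = n if lowest is None else min(lowest, n)
        highest = n if highest is None else max(highest, n)
    return lowest, highest, total
```

In C++ the equivalent is: declare `int number;`, loop `while (inFile >> number)`, and update the three running values inside the loop, without any `cout` of the numbers themselves.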
I'm working in .net core 2; UseCors isn't working for me. Here's what I have in startup.cs/ConfigureServices:
services.AddCors(options =>
{
    options.AddPolicy("AllowAllOrigins",
        builder =>
        {
            builder.AllowAnyOrigin();
            builder.AllowAnyHeader();
            builder.AllowAnyMethod();
            builder.AllowCredentials();
        }
    );
});
and under startup.cs/Configure
app.UseCors("AllowAllOrigins");
Any ideas?
var results = _formationService.GetParticipantsForSessionId(pagination.SessionId);
/*foreach (var result in results)
{
    if (result.Personne == null)
        throw new NullReferenceException();
}*/
if (!String.IsNullOrEmpty(pagination.Search))
    results = _formationService.GetParticipantsForSessionId(pagination.SessionId)
        .Where(i => i.Personne.Prenom.Contains(pagination.Search)
            || i.Personne.Nom.Contains(pagination.Search));
int totalRows = results.Count();
if (results.Any())
{
    if (pagination.RowsPerPage > 0)
        results = results.Skip((pagination.Page - 1) * pagination.RowsPerPage).Take(pagination.RowsPerPage);
    results = results.OrderBy(pagination.SortBy + (pagination.Descending ? " desc" : " asc"));
}
/*foreach (var result in results)
{
    if (result.Personne == null)
        throw new NullReferenceException();
}*/
var rows = results.ToList();
/*foreach (var row in rows)
{
    if (row.Personne == null)
        throw new NullReferenceException();
}*/
var filteredTotalRows = rows.Count();
return Json(new
{
    Rows = rows,
    FilteredTotalRows = filteredTotalRows,
    TotalRows = totalRows
});
I've got the strangest problem I've ever encountered in C# with this code. Look at the commented blocks of code: if I uncomment them, the value row.Personne/result.Personne will not be null.
The same thing applies if I watch it step by step in debug mode.
If I don't watch or reference it, it gets nulled for whatever reason. Anyone ever had this problem?
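For anyone hitting this later: the symptoms fit deferred execution. A LINQ query over an IQueryable/IEnumerable doesn't actually run until something enumerates it, so the uncommented foreach (or the debugger watch) forces evaluation while the underlying data is still reachable. A rough Python analogy of the same trap, using a generator:

```python
def build_query(rows):
    # Like a LINQ query: nothing executes until enumeration.
    return (row for row in rows if row is not None)

data = ["alice", None, "bob"]
query = build_query(data)   # query built, but not run
data.clear()                # the source changes afterwards
result = list(query)        # evaluated only here, against the emptied source
```

If that's what's happening in the C# code, materializing early (e.g. calling .ToList() right after the service call, before the context or any lazy-loaded Personne navigation properties go away) would be the usual fix. That's a guess, though, since the service internals aren't shown.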
I'm trying to show the performance gains of using different methods of finding primes up to a certain maximum, using Python.
def is_prime(a):
    global table
    checks = 0
    if a <= 1:
        return [False, 0]
    else:
        checks = checks + 1
    for j in range(0, len(table), 1):
        checks = checks + 1
        if a % table[j] == 0:
            return [False, checks]
        else:
            continue
    table.append(a)
    return [True, checks]
Basically, the value of a is compared to every value I have stored in the table, where table contains every prime that I've found previously.
If, for some values of A and J, A % table[J] == 0, then A is not prime. Otherwise, if the loop goes through all the primes in the table and none of them satisfy the A % table[J] == 0 condition, then A is prime and is appended to the table.
My other two implementations set boundaries for the loop, where one implementation goes up to A/2, and the other implementation goes up to the square root of A.
The checks variable is to count how many times the loop had to try doing a modulo operation before figuring out whether the number was prime or not. I then return a list containing a boolean of whether A is prime or not, as well as how many checks it takes.
Last thing is that I'm using the Cython module to compile my code for faster execution speeds. I'm testing the speeds between native Python code, compiled Python code, and compiled Python code with manual optimizations.
In total, I have 9 total different tests that occur. The strange thing is, when testing the compiled and manually-optimized-compiled versions of the program, I'm getting some weird numbers.
The non-boundary manually-optimized-compiled version of the program takes 60.771 seconds, but for some reason the manually-optimized-compiled version with the a/2 boundary takes 64.4279 seconds, which is longer. The manually-optimized-compiled version with the sqrt(a) boundary takes a measly 34.3189 seconds.
Why would the program take longer when it's dealing with half the numbers in question? This doesn't seem to happen when using the native Python implementation, so I'm guessing it has something to do with the compiled code being slower somehow.
I always choke when it comes to math, so please help.
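For reference, the sqrt-bounded variant described above can be sketched like this (check-counting omitted, and `table` shared across calls as in the original; the early `break` relies on the table being in increasing order, which it is when numbers are tested in order):

```python
import math

table = []  # primes found so far, in increasing order


def is_prime(a):
    """Trial division against previously found primes, up to sqrt(a)."""
    if a <= 1:
        return False
    limit = math.isqrt(a)
    for p in table:
        if p > limit:
            break          # no prime <= sqrt(a) divides a, so a is prime
        if a % p == 0:
            return False
    table.append(a)
    return True
```

The early `break` is what makes the sqrt bound pay off; if a bounded version still iterates a precomputed slice of the table instead of breaking out, the bound bookkeeping can cost more than it saves, which might be part of the timing oddity.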
Basically, I'm making a simple soccer game using C# in Unity.
A regular soccer match is 90 minutes, but my game's matches will last 5 minutes; the timer's text will still gradually go from 0 to 90, just like in other soccer games.
You want to convert the percentage along a 5 minute match to the percentage along a 90 minute match.
The basic math problem is:
x/5min = y/90min
where x is the current time and y is the number you want to display
first of all, converting to seconds will give you more precision, so do that.
5 minutes = 300 seconds
90 minutes = 5400 seconds
So the formula is
x/300sec = y/5400sec
Solve for y:
(5400 * x)/300 = y
this is the number you want to display IN SECONDS. To get it in minutes, simply convert back to minutes:
((5400 * x)/300)/60 = y
So, for example, 2.5 minutes (150 seconds) out of a 5 minute match will convert to 45 minutes using this formula.
---
Now, you probably also want to show seconds. Simple! Instead of dividing by 60, you use the modulus operation:
((5400 * x)/300) % 60 = y
So use the first formula to get the minutes, then the second formula to get the seconds. So, 0.3 minutes (18 seconds) out of a 5 minute match becomes 5.4 minutes out of a 90 minute match; eliminating the .4 gives you five minutes.
Using the second formula, we get 24 seconds. for 5:24.
If you want to use milliseconds, you can convert down further by converting minutes to milliseconds (multiply minutes by 60000) and use modulo 60000 for seconds
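Putting both formulas together, a quick sketch to sanity-check the numbers above:

```python
MATCH_REAL = 300.0    # real length of a match, in seconds (5 min)
MATCH_SHOWN = 5400.0  # displayed length, in seconds (90 min)


def display_clock(elapsed_real_seconds):
    """Map elapsed real seconds onto the displayed 90-minute clock,
    returning (minutes, seconds) for the timer text."""
    scaled = MATCH_SHOWN * elapsed_real_seconds / MATCH_REAL
    return int(scaled // 60), int(scaled % 60)
```

For the worked examples: `display_clock(150)` gives `(45, 0)` and `display_clock(18)` gives `(5, 24)`.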
That actually worked, at least for the minutes and partly for seconds.
This is how it looks:
if (GameManager.GameOn)
{
    curPlayedTimeMins += ((5400 * Time.deltaTime) / playTime) / 60;
    curPlayedTimeSecs += ((5400 * Time.deltaTime) / playTime) % 60;
}
timeText.text = Mathf.Floor(curPlayedTimeMins).ToString() + ":" + Mathf.Floor(curPlayedTimeSecs);
Now, for every 60 seconds, the minute counts +1, but instead of resetting at 60, it starts from 60 to 120, and 120 to 180 etc. The seconds don't reset after reaching 60.
I'm not sure of the exact details of what Time.deltaTime is giving you but I think the issue may be that you're +=ing the seconds, or at least never resetting it to 0. Adding another if check to see if curPlayedTimeSecs >= 60 then -=ing 60 if it is should fix the problem.
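An alternative that avoids the wrap bookkeeping entirely: accumulate a single scaled total and derive both fields from it each frame instead of +=ing two separate counters. A sketch (names hypothetical, not your actual variables):

```python
scaled_total = 0.0  # displayed seconds elapsed so far


def tick(delta_time, play_time=300.0, shown_length=5400.0):
    """Advance the clock by one frame's delta and return (minutes, seconds)
    for display; both fields are derived, so neither can drift past 60."""
    global scaled_total
    scaled_total += shown_length * delta_time / play_time
    return int(scaled_total // 60), int(scaled_total % 60)
```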
I should have figured that out; it's kind of obvious, and it worked. Thanks!
I'm trying to get my sprites in MonoGame to render at the correct depth but I'm not getting it to work like I want it to. Stuff renders on top of whatever it wants right now.
https://pred.me/pics/Game1_2018-05-05_08-55-03.png
The white pixels are supposed to be stars and I want them behind my other objects.
My entity class has a depth variable that I set for all other game objects inherited from entity. The star entity for example has a depth of 0f, while the other game objects have a depth of 1f.
Entity class draw
public virtual void Draw(SpriteBatch spriteBatch, float depth)
{
    spriteBatch.Draw(texture, Position, null, color, Orientation, Size / 2f, 1f, 0, depth);
}
Entity Manager class draw
public static void Draw(GraphicsDevice gd, SpriteBatch spriteBatch)
{
    t = new Texture2D(gd, 1, 1);
    t.SetData(new[] { Color.White });
    int bw = 1;
    foreach (var entity in entities.ToList())
    {
        entity.Draw(spriteBatch, entity.Depth);
        //spriteBatch.Draw(t, new Rectangle(entity.BoundingBox.Left, entity.BoundingBox.Top, bw, entity.BoundingBox.Height), Color.White);
        //spriteBatch.Draw(t, new Rectangle(entity.BoundingBox.Right, entity.BoundingBox.Top, bw, entity.BoundingBox.Height), Color.White);
        //spriteBatch.Draw(t, new Rectangle(entity.BoundingBox.Left, entity.BoundingBox.Top, entity.BoundingBox.Width, bw), Color.White);
        //spriteBatch.Draw(t, new Rectangle(entity.BoundingBox.Left, entity.BoundingBox.Bottom, entity.BoundingBox.Width, bw), Color.White);
    }
}
Relevant code from main class draw
spriteBatch.Begin(SpriteSortMode.BackToFront, null);
EntityManager.Draw(GraphicsDevice, spriteBatch);
Enemy entity class constructor
public Enemy(Texture2D texture, Vector2 position)
{
    this.texture = texture;
    Position = position;
    Radius = texture.Width / 2f;
    color = Color.Red;
    Acceleration = 1f;
    Depth = 1f;
    CoolDown = 0f;
}
Any idea what I'm doing wrong here? I'm passing the entity.Depth to the draw function which I'd think would work.
https://msdn.microsoft.com/en-us/library/bb196412%28v=xnagamestudio.30%29.aspx?f=255&MSPPError=-2147217396
0.0f is the frontmost layer, so invert the depths and I think you'll be good.
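i.e., flip the convention before handing the value to SpriteBatch. A tiny sketch of the mapping, assuming depths stay in [0, 1]:

```python
def to_layer_depth(logical_depth):
    """Convert a 'bigger = nearer' game depth into a SpriteBatch layerDepth,
    where 0.0 is the frontmost layer and 1.0 the backmost."""
    return 1.0 - logical_depth
```

So the ships at depth 1f end up at layerDepth 0.0 (front), and the stars at depth 0f end up at 1.0 (back).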
That did the job, thanks!
Is it possible to reconstruct position from a paraboloid distorted depth map?
For a while I've been storing my shadow's world-position alongside the depth map because some shadow casters use perspective matrices but others use the dual-paraboloid approach.
However, I'm looking to change that, but I'm not sure if it's even possible to reconstruct its position.
Here's how the projection calculation works:
// Transform position to the paraboloid's view space
gl_Position = ViewMatrix * ModelMatrix * vec4(Vertex, 1);
// Only need 1 view matrix, multiply by -1 for second viewing direction
gl_Position.x *= Direction;
gl_Position.z *= Direction;
const float fLength = length(gl_Position.xyz);
gl_Position /= fLength;
gl_Position.xy /= 1.0f - gl_Position.z;
gl_Position.z = (fLength) / (FarPlane);
gl_Position.w = 1.0f;
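For what it's worth, the projection above looks invertible on paper: the stored depth gives back the length, and the unit-sphere constraint recovers z from the projected xy. A round-trip sketch in Python (the Direction flip and view transform folded out, FarPlane value assumed):

```python
import math

FAR_PLANE = 100.0  # assumed; use the same value as the projection


def paraboloid_project(p):
    """Forward projection, mirroring the GLSL snippet above."""
    length = math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2)
    qx, qy, qz = (c / length for c in p)
    return qx / (1.0 - qz), qy / (1.0 - qz), length / FAR_PLANE


def paraboloid_unproject(x, y, depth):
    """Recover the view-space position from paraboloid xy plus stored depth."""
    s = x * x + y * y
    # q lies on the unit sphere: s * (1 - qz)^2 + qz^2 = 1
    # -> (s + 1) qz^2 - 2 s qz + (s - 1) = 0 -> qz = (s - 1) / (s + 1)
    qz = (s - 1.0) / (s + 1.0)   # the other root, qz = 1, is the pole
    qx = x * (1.0 - qz)
    qy = y * (1.0 - qz)
    length = depth * FAR_PLANE
    return qx * length, qy * length, qz * length
```

This only covers the hemisphere the paraboloid faces (qz < 1); you'd still need the Direction sign and the inverse view matrix to get back to world space.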
Anyone had any luck getting Source SDK 2013 to compile in newer versions of Visual Studio? I'm using Visual Studio 2017 which I know isn't supported, but I've generated a .sln using the fix at https://developer.valvesoftware.com/wiki/Source_SDK_2013, but I get compile errors which I can't figure out.
Most of the problems seem to be in memoverride.cpp - I get a few errors on line 122 of https://github.com/ValveSoftware/source-sdk-2013/blob/0d8dceea4310fde5706b3ce1c70609d72a38efdf/mp/src/public/tier0/memoverride.cpp - this declaration has no storage class or type specifier, syntax error: 'void' should be preceded by ';' and missing type specifier - int assumed. Note: C++ does not support default-int
By eye it all seems okay but it's like the compiler thinks there's a missing or stray semicolon somewhere.
Anyone had any luck with this? It seems like memoverride.cpp has been the source of a few issues over the years but none of the older solutions I've looked at have helped.
Looks like ALLOC_CALL is a preprocessor macro wrapping the VC compiler directives _CRTNOALIAS and _CRTRESTRICT.
It looks like _CRTNOALIAS is an abstraction of the MS-specific __declspec(noalias) (see the Microsoft Docs).
_CRTRESTRICT doesn't seem to be anything, maybe. So the two errors may be explained by reading them backwards:
missing type specifier - int assumed. Note: C++ does not support default-int
The compiler, after evaluating the preprocessor, is reading line 122 as:
noalias _CRTRESTRICT void *malloc( size_t nSize ) {
//etc.
}
So it's interpreting a variable or function declaration as:
noalias _CRTRESTRICT
Which has no type, so you get the above error. In pre-ANSI C, int is the assumed type if no type is specified. For ANSI C (and perhaps C99, not sure), this was preserved for legacy code. It is no longer supported by modern standards, so you get an error.
Then the second error is:
this declaration has no storage class or type specifier, syntax error: 'void' should be preceded by ';'
Which is popping up because the previous statement above is being interpreted as an implicit-int variable declaration, but does not end with a semicolon, so the next function declaration, void *malloc, is throwing a "missing semicolon" error.
The fix? Without knowing anything about this code, the MS compiler, or what _CRTRESTRICT was supposed to be, is to put
#define _CRTRESTRICT
between lines 112 and 113 and see if that does anything. Otherwise, figure out what _CRTRESTRICT is SUPPOSED to say and see if it's been deprecated between 2013 and 2017.
Ah, thanks - yeah, I got thrown off because by appearance _CRTNOALIAS seemed like something that was provided (similar to _CRTRESTRICT in corecrt.h), and the Internet suggested that it was a thing, but it's possibly been moved to some other header file since 2013. Looks like this is what it should be:
#define _CRTNOALIAS __declspec(noalias)
I need help with my thread (here)
For those too lazy to click the thread;
Hello! How do you compile a C/C++ GarrysMod module into a .dll for Linux, to use with the Linux version of the GarrysMod server? I've been able to successfully compile my module into a win32 module and require() it on a Windows server, but I can't seem to compile it for Linux. I'm assuming you use the makefile and some kind of mingw32 or fake-Linux-on-Windows setup, but every time I do, the libraries used (such as lua_shared and openssl/cryptopp) don't link properly and are not found.
Some ground rules I'm following and using:
I'm using garrysmod_common from danielga (GitHub) so I can access the lua_shared libraries, because I can't find a source to link.
All sources found so far point me to compiling it on Windows, as I've already tried compiling normally on Linux and the dll is just a renamed .so file.
Before you say "have you tried compiling on Linux": yes, but the end result is always either a fake dll or a .so file.
Any help is appreciated.
So did anything I suggest work? What was your solution?
Here we go again
Is it possible to use AddForceAtPosition, and add extra force to the ball, but without messing up the position the ball is supposed to head towards?
Vector3 force = new Vector3(teammate.position.x * passingForce.x, teammate.position.y * passingForce.y, teammate.position.z * passingForce.z);
ball.GetComponent<Rigidbody>().AddForceAtPosition(force, teammate.position);
Obviously by multiplying, the ball goes faster, but it messes up the position it is supposed to head towards (which is the regular teammate.position).
Are you throwing the ball at the teammate, or is the teammate throwing the ball? What is the target?
The teammate is the target.
It looks like the "Position" variable is supposed to be the place where the force is being applied ON THE OBJECT being "forced".
https://docs.unity3d.com/ScriptReference/Rigidbody.AddForceAtPosition.html
Applies force at position. As a result this will apply a torque and force on the object.
For realistic effects position should be approximately in the range of the surface of the rigidbody. This is most commonly used for explosions. When applying explosions it is best to apply forces over several frames instead of just one. Note that when position is far away from the center of the rigidbody the applied torque will be unrealistically large.
It looks like this is used to apply force to different PARTS of a rigid body.
So, my assumption is that for a simple ball, you may want to just use AddForce instead (see the Unity docs).
Then calculate your force vector to point in the direction of your teammate: basically just subtract the ball's position from the target position.
https://www.mathsisfun.com/algebra/images/vector-subtract.gif
Like that (where a is the position of the target and b is the position of the ball. a-b is the force vector you want to apply to the ball.)
However, this force will be larger the further the player is from the ball. So it looks like you can use Vector3.ClampMagnitude (see the Unity docs):
Like this:
Vector3.ClampMagnitude(MyVector, 1.0f)
Then you can multiply this vector by the initial force magnitude (velocity, I believe) you want to apply to it.
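The whole suggestion in one sketch, in plain Python since the vector math is the point (names hypothetical, not Unity API):

```python
def pass_force(ball_pos, target_pos, strength, max_magnitude=1.0):
    """Aim at the target, clamp the direction's magnitude, then scale
    by the desired strength; direction is preserved, only speed changes."""
    direction = [t - b for t, b in zip(target_pos, ball_pos)]
    magnitude = sum(c * c for c in direction) ** 0.5
    if magnitude > max_magnitude:  # what Vector3.ClampMagnitude does
        direction = [c * max_magnitude / magnitude for c in direction]
    return [c * strength for c in direction]
```

Because the scaling multiplies every component by the same factor, the ball still heads toward the teammate, just faster or slower depending on `strength`.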
NOTE: I've never used Unity in my life so this is an educated guess based off of what I've read plus what I know about physics and vectors. Someone with more Unity experience might be able to correct anything here
Yeah #define _CRTNOALIAS seems to be what some previous version of Visual Studio did.