[QUOTE=war_man333;46850452]It magically solved itself, out of nowhere.
But since you ask so nicely, C++ and not creating any threads.[/QUOTE]
Good, because I have no clue about that :v:
Let's say I need some sort of list that can store chars, and it should have a million spaces.
Obviously it would take some time to put a 'p' in every space if it's a vector I'm using.
What's the fastest container to set all spaces to a char?
You can initialise a vector with its constructor:
[code]
int size = 5;
char init = 'p';
std::vector<char> vec(size, init);
[/code]
However, if the size is large you'd need to be careful adding elements afterwards: once the vector exceeds its capacity it reallocates and copies everything over, typically growing the capacity by a factor of 1.5 to 2 each time.
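A minimal sketch of both points (the helper name is just for illustration; the exact growth factor is implementation-defined):

```cpp
#include <cstddef>
#include <vector>

// Build a vector of n chars, all set to `fill`, in a single allocation.
std::vector<char> make_filled(std::size_t n, char fill) {
    std::vector<char> v(n, fill);  // fill constructor: O(n), one allocation
    v.reserve(2 * n);              // optional: pre-allocate room for later push_backs
    return v;
}
```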
In the BMP standard, How do I know when the pixel data starts? Are there some uniform bytes preceding it to mark it, or is it at a fixed location?
[QUOTE=proboardslol;46855034]In the BMP standard, How do I know when the pixel data starts? Are there some uniform bytes preceding it to mark it, or is it at a fixed location?[/QUOTE]
according to this [url]http://en.wikipedia.org/wiki/BMP_file_format[/url]
the pixel array offset is stored in the last 4 bytes of the 14-byte file header (bytes 10-13).
Just a general question, if I'd want to create a regular application on a mobile, which software (For example Xamarin) would you prefer?
[QUOTE=Persious;46855988]Just a general question, if I'd want to create a regular application on a mobile, which software (For example Xamarin) would you prefer?[/QUOTE]
Depends on the target platforms and what you want out of the application. What do you have in mind?
[QUOTE=FalconKrunch;46855087]according to this [url]http://en.wikipedia.org/wiki/BMP_file_format[/url]
the pixelarray offset is at the last 4 bytes of the 14 byte header.[/QUOTE]
Will the array offset ever need all 4 bytes? If so, how should I handle such a case? In one image, for example, the 4 bytes are 10001010 (138 in base 10), but then it's followed by 3 empty bytes (00000000). How should I normally handle this?
It's nothing specific, I'm just wondering which ones are good out there if you want to teach yourself within the mobile world.
[QUOTE=proboardslol;46856742]Will the array offset ever need 4 bytes? If so, how should I handle such a case. In one image, for example, the 4 bytes are 10001010 (138 in base 10), but then it's followed by 3 empty bytes (00000000). how should I normally handle this?[/QUOTE]
I'm not sure I understand what you're asking, but the specification says there are 4 bytes for the pixel array offset. You read in those four bytes, and you seek to that position in the file and start reading pixel data.
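That read-then-seek looks roughly like this (the function name is made up; reading the four bytes individually also sidesteps any dependence on the machine's byte order):

```cpp
#include <cstdint>
#include <cstdio>

// Read the BMP pixel-array offset (an unsigned 32-bit little-endian value
// stored at bytes 10-13 of the file header) and seek to the pixel data.
// Returns the offset, or 0 on failure.
std::uint32_t seek_to_pixels(std::FILE* f) {
    if (std::fseek(f, 10, SEEK_SET) != 0) return 0;
    unsigned char b[4];
    if (std::fread(b, 1, 4, f) != 4) return 0;
    // Assemble the little-endian bytes into a number, least significant first.
    std::uint32_t offset = (std::uint32_t)b[0]
                         | ((std::uint32_t)b[1] << 8)
                         | ((std::uint32_t)b[2] << 16)
                         | ((std::uint32_t)b[3] << 24);
    std::fseek(f, (long)offset, SEEK_SET);  // jump to the pixel array
    return offset;
}
```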
What's the easiest way to run a periodic task in python? I just need to run a single line of code once every 24 hours so I don't want to use any advanced scheduling libraries
[QUOTE=BackwardSpy;46857646]I'm not sure I understand what you're asking, but the specification says there are 4 bytes for the pixel array offset. You read in those four bytes, and you seek to that position in the file and start reading pixel data.[/QUOTE]
What I mean is if I read all 4 bytes as a single number, I'd have a massive number in the upper ranges of what a 32 bit integer can have. The number, instead of 10001010, would be 10001010000000000000000000000000 which is a gigantic number. I'm saying, what if two of the 4 bytes are used, and I'm only handling one? Or is it unlikely that all 4 would ever be used
[QUOTE=proboardslol;46858147]What I mean is if I read all 4 bytes as a single number, I'd have a massive number in the upper ranges of what a 32 bit integer can have. The number, instead of 10001010, would be 10001010000000000000000000000000 which is a gigantic number. I'm saying, what if two of the 4 bytes are used, and I'm only handling one? Or is it unlikely that all 4 would ever be used[/QUOTE]
It's 4 bytes, those 4 bytes represent a number. You read in those 4 bytes and you store them as whatever 4 byte numeric data type your language offers you. You then use this number to find where the pixel array is stored.
I think maybe you've misunderstood how it works or something, but hopefully I've helped to clear it up.
[QUOTE=BackwardSpy;46858246]It's 4 bytes, those 4 bytes represent a number. You read in those 4 bytes and you store them as whatever 4 byte numeric data type your language offers you. You then use this number to find where the pixel array is stored.
I think maybe you've misunderstood how it works or something, but hopefully I've helped to clear it up.[/QUOTE]
I understand how it works, but when I xxd the file, the number at 0xA to 0xD is:
(0xA) -> (0xD)
10001010 00000000 00000000 00000000
Obviously those last three bytes aren't significant, or else the data would begin at 0x8A000000. (or 2315255808 in decimal).
Am I meant to read the bytes backwards? that's how the data itself is represented (TBGR)
[QUOTE=proboardslol;46858460]I understand how it works, but when I xxd the file, the number at 0xA to 0xD is:
(0xA) -> (0xD)
10001010 00000000 00000000 00000000
Obviously those last three bytes aren't significant, or else the data would begin at 0x8A000000. (or 2315255808 in decimal).
Am I meant to read the bytes backwards? that's how the data itself is represented (TBGR)[/QUOTE]
Read up on big and little endian. You need to learn how to use Google.
Also if I know both my data format and processor format are always going to have the same endian, I simply alias a position in the data with the type I want.
[QUOTE=elevate;46858490]Read up on big and little endian. You need to learn how to use Google.
Also if I know both my data format and processor format are always going to have the same endian, I simply alias a position in the data with the type I want.[/QUOTE]
[url]http://en.m.wikipedia.org/wiki/Ten_Little_Indians[/url] ?
[sp]jk thank you for the help[/sp]
Ohhh, I get it now, you were referring to big endian vs little endian order. Sorry I couldn't be more helpful, I didn't realise.
I have a list of unordered points in an array, like this:
vertices = [0, 0, 32, 32, 0, 32, 64, 0, 64, 96, 32, 96, 0, 96, 32, 128, 128, 128, 160, 96, 32, 160, 0, 160, 160, 160, 128, 160]
Where each pair of numbers is a position, such that vertices[0] and vertices[1] forms one point. This particular array of points forms a shape like this:
[IMG]http://puu.sh/e5CLl/1397b764e6.png[/IMG]
What's an algorithm to order the vertices so that they are in a consistent order when drawing them? For instance, from 0, 0 to 0, 32, to 32, 32 to 32, 96, to 0, 96 and so on... or from 64, 0 to 64, 96 to 160, 96 to 160, 160 to 128, 160 and so on...?
You could do it with a convex shape but not with a concave one, afaik.
There are bound to be datasets with multiple possible solutions for concave shapes, so even if you did manage to find a solution it wouldn't necessarily be the one you're looking for.
If you have to do this, you could probably get an approximate solution by brute-forcing while avoiding line intersections (and diagonal lines?) and picking the candidate with the largest area.
[QUOTE=Cold;46860096]You could do it with a convex shape but not with a concave one, afaik.
There are bound to be datasets with multiple possible solutions for concave shapes, so even if you did manage to find a solution it wouldn't necessarily be the one you're looking for.
If you have to do this, you could probably get an approximate solution by brute-forcing while avoiding line intersections (and diagonal lines?) and picking the candidate with the largest area.[/QUOTE]
Had the same problem. What Cold said is true: without more data it's impossible, since there are multiple solutions.
2 ways:
a) You already know what shape you want, so use that to come up with a sorting algorithm that produces the sequence you want (that's how I solved it)
b) Do what Cold suggested, but then you will have no control over the resulting shape.
Well, I reached the same conclusion in the end that it's impossible to do properly, but I ended up finding an algorithm that works, I just had to code the whole thing from scratch again :suicide: [URL]http://www.gogo-robot.com/2010/02/01/converting-a-tile-map-into-geometry/[/URL]
(1) Compute the convex hull of the point set. Initialize your polygon P with this convex hull.
(2) Create a list L of all the points not on the convex hull.
While the list is not empty:
(3) Select one point Q from the list L
(4) Select two points QL and QR from the polygon P that share an edge with each other
(5) Insert the point Q into the polygon P between the points QL and QR (deleting the edge QL-QR, and creating the edges QL-Q and Q-QR)
The convex hull of a point set is unique. But for concave polygons, depending on how you choose your points Q, QL and QR, your concave polygons vary. At the very least you should choose them so that QL-Q and Q-QR do not intersect with other edges. You could, for example, choose Q randomly and pick QL and QR based on the closest edge to Q.
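Steps (3)-(5) can be sketched like this. It's a toy version that inserts each leftover point at its closest edge, with no check that the new edges QL-Q and Q-QR avoid intersections (you'd want to add that for real use); `Pt` and the function names are just for illustration:

```cpp
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

// Squared distance from point p to segment a-b.
static double seg_dist2(Pt p, Pt a, Pt b) {
    double dx = b.x - a.x, dy = b.y - a.y;
    double len2 = dx * dx + dy * dy;
    double t = len2 == 0 ? 0 : ((p.x - a.x) * dx + (p.y - a.y) * dy) / len2;
    if (t < 0) t = 0;
    if (t > 1) t = 1;
    double px = a.x + t * dx - p.x, py = a.y + t * dy - p.y;
    return px * px + py * py;
}

// Steps (3)-(5): insert each leftover point Q into the polygon at the edge
// closest to it. `poly` starts as the convex hull (step 1); `rest` is the
// list L of points not on the hull (step 2).
void insert_points(std::vector<Pt>& poly, std::vector<Pt> rest) {
    while (!rest.empty()) {
        Pt q = rest.back();
        rest.pop_back();
        std::size_t best = 0;
        double bestd = 1e300;
        for (std::size_t i = 0; i < poly.size(); ++i) {
            double d = seg_dist2(q, poly[i], poly[(i + 1) % poly.size()]);
            if (d < bestd) { bestd = d; best = i; }
        }
        // Split edge QL-QR into QL-Q and Q-QR.
        poly.insert(poly.begin() + best + 1, q);
    }
}
```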
So how do I read bytes little endian using fread?
[QUOTE=proboardslol;46862514]So how do I read bytes little endian using fread?[/QUOTE]
On a little-endian machine:
[code]
int little;
fread (&little, sizeof (int), 1, file);
[/code]
If you want to adjust the byte order depending on the endianness of the machine, you can use a macro.
I also take the actual conversion code out of the macro to avoid macro expansion bugs.
[code]
int swap_endian_int32 (int x)
{
return ((x >> 24) & 0xFF) |
((x << 8) & 0xFF0000) |
((x >> 8) & 0xFF00) |
((x << 24) & 0xFF000000);
}
#ifdef LITTLE_ENDIAN
#define LITTLE_INT32(x) (x)
#else
#define LITTLE_INT32(x) (swap_endian_int32 (x))
#endif
...
int little;
fread (&little, sizeof (int), 1, file);
little = LITTLE_INT32 (little);
[/code]
I'll leave handling the endian type of the other primitives as an exercise to the reader.
Do note that only primitives like int and float are affected by endian order. The order of elements in a struct is not affected by big or little endian, although the elements themselves are.
However, element padding becomes an issue when reading in structs raw. The only way to solve that problem is with compiler-specific functionality such as packing pragmas.
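For example, with the packing pragma (which is compiler-specific but accepted by MSVC, GCC, and Clang) the 14-byte BMP file header can be read in one fread without padding getting in the way; a sketch assuming a little-endian host:

```cpp
#include <cstdint>

// Without packing, most compilers would pad after the 2-byte `type` field
// so that `size` lands on a 4-byte boundary, making the struct 16 bytes
// and misaligning every field against the file layout.
#pragma pack(push, 1)
struct BmpFileHeader {
    std::uint16_t type;       // 'BM'
    std::uint32_t size;       // file size in bytes
    std::uint16_t reserved1;
    std::uint16_t reserved2;
    std::uint32_t offset;     // pixel-array offset
};
#pragma pack(pop)

static_assert(sizeof(BmpFileHeader) == 14, "header must be unpadded");
```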
In order to avoid conditional compilation, you can use
[cpp]/* Little-endian to host */
uint32_t ltohl(uint32_t x)
{
uint8_t tmp[4];
memcpy(tmp, &x, 4);
return ((uint32_t)tmp[3] << 24) |
((uint32_t)tmp[2] << 16) |
((uint32_t)tmp[1] << 8) |
((uint32_t)tmp[0]);
}
/* Host to little-endian */
uint32_t htoll(uint32_t x)
{
uint8_t tmp[4];
tmp[3] = (uint8_t)(x >> 24);
tmp[2] = (uint8_t)(x >> 16);
tmp[1] = (uint8_t)(x >> 8);
tmp[0] = (uint8_t)(x);
memcpy(&x, tmp, 4);
return x;
}[/cpp]
So as some of you may know, lately I've been working on a path tracer.
However, as it's a fairly difficult subject and I've never had any courses about it, I'm relying on sources I find on the internet. Sadly, those sources are often hard to fully understand as they often assume the reader has some knowledge about the subject.
For this reason, I'd like to ask a couple of questions in this thread so everyone can benefit from given answers.
So, my first question, as I'll be asking one thing at a time:
What exactly is a [URL="http://en.wikipedia.org/wiki/Bidirectional_reflectance_distribution_function"]BRDF[/URL], what does it do and how does it work?
My understanding so far is that it's a function which takes an input direction and an output direction and performs some magic.
Could anyone perhaps explain the BRDF for a diffuse surface in detail, as an example? Or perhaps any sources out there which explain it in detail?
[editline]derp[/editline]
Hm, is it perhaps how much the given output direction influences the result?
So for example, with a perfect mirror, the result would always be 0, except when the output direction is the perfect reflection of the input direction? But if this is the case, how does a refraction brdf work?
[QUOTE=ThePuska;46866274]In order to avoid conditional compilation, you can use
[cpp]/* Little-endian to host */
uint32_t ltohl(uint32_t x)
{
uint8_t tmp[4];
memcpy(tmp, &x, 4);
return ((uint32_t)tmp[3] << 24) |
((uint32_t)tmp[2] << 16) |
((uint32_t)tmp[1] << 8) |
((uint32_t)tmp[0]);
}
/* Host to little-endian */
uint32_t htoll(uint32_t x)
{
uint8_t tmp[4];
tmp[3] = (uint8_t)(x >> 24);
tmp[2] = (uint8_t)(x >> 16);
tmp[1] = (uint8_t)(x >> 8);
tmp[0] = (uint8_t)(x);
memcpy(&x, tmp, 4);
return x;
}[/cpp][/QUOTE]
Very clever! It takes advantage of the byte ordering that int already has. Easier to read and uses the stdint types too.
Anyone knows how to simulate snow particle motion in 3D?
[QUOTE=Fourier;46868318]Anyone knows how to simulate snow particle motion in 3D?[/QUOTE]
Try experimenting with random values. Something that comes to mind:
Interpolate between 10% and 100% (random) of maximum fall speed for Z, and interpolate between -100% and 100% (also random) of maximum offset speed for X and Y separately (considering Z points up/down).
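A rough sketch of that update rule (the struct, function names, and use of rand() are just for illustration; re-randomizing every frame gives a cheap flutter, and smoothing the random values over time would look nicer):

```cpp
#include <cstdlib>

struct Flake { float x, y, z; };

// Uniform random float in [lo, hi].
float frand(float lo, float hi) {
    return lo + (hi - lo) * (std::rand() / (float)RAND_MAX);
}

// Per-frame update: fall at a random 10%-100% of max fall speed on Z,
// and drift by a random -100%..100% of max offset speed on X and Y.
void update_flake(Flake& f, float max_fall, float max_offset, float dt) {
    f.z -= frand(0.1f, 1.0f) * max_fall * dt;
    f.x += frand(-1.0f, 1.0f) * max_offset * dt;
    f.y += frand(-1.0f, 1.0f) * max_offset * dt;
}
```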
[QUOTE=xThaWolfx;46867519]So as some of you may know, lately I've been working on a path tracer.
However, as it's a fairly difficult subject and I've never had any courses about it, I'm relying on sources I find on the internet. Sadly, those sources are often hard to fully understand as they often assume the reader has some knowledge about the subject.[/QUOTE]
Did you see my [url=http://facepunch.com/showthread.php?t=1444284&p=46845009&viewfull=1#post46845009]earlier post[/url] about the [url=http://www.pbrt.org/]book[/url] I've been reading?
[QUOTE=xThaWolfx;46867519]What exactly is a [URL="http://en.wikipedia.org/wiki/Bidirectional_reflectance_distribution_function"]BRDF[/URL], what does it do and how does it work?[/QUOTE]
A BRDF basically is just a function that takes a pair of directions and returns the fraction of light coming from one direction that's reflected to the other. Depending on how sophisticated your renderer is, it can also take some other arguments: the point where the reflection occurs (to allow reflecting light differently in different places. i.e. textures), the wavelength of light being reflected (to allow reflecting different amounts of different wavelengths, i.e. colors), and the time when the reflection occurs (to allow for animation).
The "magic" is in how you [i]use[/i] the BRDF: it's a component of the [url=https://en.wikipedia.org/wiki/Rendering_equation]rendering equation[/url]. Basically, the amount of reflected light leaving a surface in a particular outgoing direction is the integral over all possible incoming directions of: the amount of light coming from the incoming direction, times the cosine of the incoming direction's angle with the surface, times the BRDF that says how much light from the incoming direction is reflected to the outgoing direction, times the differential solid angle in the incoming direction.
(The "differential solid angle" part bears a little explanation, since I don't know how familiar you (or other people here) are with calculus. An integral basically represents the sum of an infinite number of infinitesimal (infinitely small but nonzero) things, and a differential represents the infinitesimal things being added. It's the "d" in "∫f(x) dx". This post isn't meant to be a full explanation of integration, but the key thing to understand is that differential solid angle isn't a number that you actually calculate; it's just part of the integral expression that says what variable is being integrated over.)
When you use a Monte Carlo method like path tracing, you're [i]estimating[/i] the value of this integral by statistical sampling: pick a random value of the variable being integrated (a direction), evaluate the expression for that value (by tracing a ray to find the light coming from the direction, and multiplying it by the BRDF and cosine terms), and assume that the result is the same for [i]all[/i] directions (this is the estimation part). Average together enough random samples of different directions and you get something that's reasonably close to the real value of the integral.
I'm not going to go into a lengthy explanation of Monte Carlo integration here (though it could be the topic of another post), but here's the key thing to know: you can pick your random direction according to any [url=https://en.wikipedia.org/wiki/Probability_distribution]probability distribution[/url] that's nonzero over the directions that light can reflect from, then calculate the reflected light coming from that direction, then divide by the [url=https://en.wikipedia.org/wiki/Probability_density_function]probability density[/url] with which you chose the direction to get what mathematicians call an "unbiased estimator" of the integral over all directions. In other words, it's OK to weight your random sampling so that some directions are checked more often than others, as long as you adjust the light values from those directions to compensate. You'll see what I mean in a moment.
[QUOTE=xThaWolfx;46867519]Could anyone perhaps explain the BRDF for a diffuse surface in detail, as an example?[/QUOTE]
A [url=https://en.wikipedia.org/wiki/Lambertian_reflectance]Lambertian[/url] (ideal diffuse) surface reflects light equally in all directions, so its BRDF is a constant: the surface's [url=https://en.wikipedia.org/wiki/Albedo]albedo[/url], divided by pi for reasons related to conservation of energy.
Let's say a ray from your camera hits a Lambertian surface with an albedo of 0.7 (meaning that it reflects 70% of the light that hits it, and absorbs the other 30%), and you're going to estimate the light leaving the surface along that ray using Monte Carlo integration. You start by picking a random direction (in the hemisphere above the surface) with uniform probability density (meaning that all directions are equally likely to be chosen), then figure out how much light is coming from that direction somehow (such as by tracing a ray in the direction, i.e. path tracing). Take that light value and multiply it by the BRDF (0.7/pi) and the cosine term.
Now you need to divide by the probability density to get an unbiased estimator of the integral. Since you chose the direction with uniform density, the density is a constant: 1/(2*pi), since the direction was chosen from a hemisphere and the surface area of a unit hemisphere is 2*pi. Dividing by 1/(2*pi) is the same as multiplying by 2*pi, so this makes sense intuitively: you can estimate the integral of a function over the directions in a hemisphere by picking a random direction, evaluating the function for that direction, and multiplying the result by the surface area of a unit hemisphere. The estimate of light leaving the surface along the original ray is the incoming light from the random direction, times (0.7/pi) from the BRDF, times (2*pi) representing the hemisphere, times the cosine term. The multiplication and division by pi cancels out, so it's really just the incoming light times 1.4 times the cosine term.
Let's go back a step and say that instead of choosing your random direction with [i]uniform[/i] density, you weight it so that the probability density of each direction is proportional to the cosine of its angle with the surface normal. In other words, directions that are closer to perpendicular with the surface are chosen more often than directions that go "along" the surface. You might think that this would cause light from some directions to be over-represented and others to be under-represented, but the division by the probability density compensates for that.
The incoming light, BRDF, and cosine terms from the rendering equation are the same, but now the probability density is ([i]d[/i]/pi), where [i]d[/i] is the cosine term for the direction you chose. Note that this is the same cosine term in the rendering equation. Dividing by ([i]d[/i]/pi) is the same as multiplying by (pi/[i]d[/i]), so the result for the whole thing is the incoming light from the random direction, times (0.7/pi) from the BRDF, times (pi/[i]d[/i]) from the cosine-weighted hemisphere, times [i]d[/i] (the rendering equation's cosine term). This time both pi and [i]d[/i] cancel out, so it's just the incoming light times 0.7 (the surface's albedo).
Both of these strategies (uniform sampling and cosine-weighted sampling) will converge to the same result, but the cosine-weighted sampling will generally converge faster since the random samples are concentrated more in directions that reflect more light.
[QUOTE=xThaWolfx;46867519]So for example, with a perfect mirror, the result would always be 0, except when the output direction is the perfect reflection of the input direction?[/QUOTE]
Yep. The BRDF for a perfect mirror is a little weird, since it's a [url=https://en.wikipedia.org/wiki/Dirac_delta_function]Dirac delta function[/url]: infinite in one direction, zero everywhere else. But when you choose your "random" reflection direction by just taking the exact mirror direction, you're using a probability density function that's [i]also[/i] a delta function: infinite probability density of choosing the mirror direction, zero probability of choosing anything else. When you multiply the incoming light by the BRDF and divide by the probability density, the infinities cancel out. (At least, they do on paper. Obviously, dividing infinity by infinity in a computer program isn't going to give you a useful result. Delta functions have to be handled as special cases in rendering software.)
[QUOTE=xThaWolfx;46867519]But if this is the case, how does a refraction brdf work?[/QUOTE]
Well, refraction would be a BTDF, not a BRDF, but the principle is the same: it's a delta function, just like a mirror. The only difference is which direction the delta "spike" points in.
…Sorry if this is a bigger answer than you wanted; I sort of got carried away with writing and then didn't have time to try to trim it down. Let me know if there's anything that doesn't make sense.