[QUOTE=Krinkels;47767041][url=http://vixra.org/abs/1503.0193]Saw this gem on vixra today.[/url]
My favourite part:
[img]http://i.imgur.com/xMrNvoA.png[/img][/QUOTE]
Okay, I didn't read the top part of the post for a second, and I thought, "Is Krinkels losing it? Are the rationals not closed under multiplication anymore?"
[editline]20th May 2015[/editline]
I googled "Christina Munns" and google autocompleted to "Christina Munns homeopathy." Yep.
[editline]20th May 2015[/editline]
Comparing the factors of the standard model gauge group with shit from the Vedas. Lol. This woman is ridiculous.
[QUOTE=cathal6606;47767650]I missed the lectures on that back when I was doing calculus, so I ended up just ignoring it. Your post made me go look it up and learn it. I seriously overestimated it; it's not nearly as difficult as I thought. I guess the formula intimidated me.[/QUOTE]
I've seen Taylor series many times and was also intimidated.
I still don't get how it works with derivatives and shit, but it made Euler's formula e^(i*φ) = cos(φ) + i*sin(φ) even cooler.
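Euler's formula is easy to check numerically. A quick sanity-check sketch (the angle 0.7 is an arbitrary choice):

```python
import cmath
import math

# Euler's formula: e^(i*phi) = cos(phi) + i*sin(phi)
phi = 0.7  # any angle in radians
lhs = cmath.exp(1j * phi)
rhs = complex(math.cos(phi), math.sin(phi))
print(abs(lhs - rhs))  # agrees to floating-point precision
```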
[editline]21st May 2015[/editline]
Limits to infinity!
[video=youtube;lHqz839Xq8I]https://www.youtube.com/watch?v=lHqz839Xq8I[/video]
There are a stunning number of websites that use forward Euler as an approximation when finding the backward Euler solution.
[editline]22nd May 2015[/editline]
Numerically at least.
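The difference is not cosmetic. A sketch on the standard stiff test equation y' = a*y (the coefficient, step size, and step count here are arbitrary choices for illustration): forward Euler blows up while backward Euler decays like the true solution.

```python
# Forward vs backward Euler on the stiff test problem y' = a*y, y(0) = 1.
# For this linear equation, backward Euler has the closed-form step
# y_{n+1} = y_n / (1 - h*a), so no nonlinear solve is needed.
a, h, steps = -50.0, 0.1, 20
y_fwd = y_bwd = 1.0
for _ in range(steps):
    y_fwd = y_fwd * (1 + h * a)   # forward (explicit) Euler
    y_bwd = y_bwd / (1 - h * a)   # backward (implicit) Euler
print(y_fwd)  # each step multiplies by -4, so this explodes
print(y_bwd)  # each step multiplies by 1/6, so this decays toward 0
```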
[QUOTE=Fourier;47773173]I've seen Taylor series many times and was also intimidated.
I still don't get how it works with derivatives and shit,[/QUOTE]
Well, think of it this way: If you know the derivative at a point is zero, you know that if you change the x-value you look at by a very small amount, your y-value won't change. The function is locally approximated by a flat line. But obviously, that doesn't tell you much about the shape of the function away from that point. If you know the [I]second[/I] derivative as well, it tells you concavity. Now you can tell which way the function is going as it leaves the point: up or down (or if the second derivative is zero, a different way in each direction). Now you have a parabola that approximates the function a bit better than the line did. And if you know the third derivative, and your second derivative was zero, you know which way goes up and which goes down, allowing you to approximate even better...
So it should be pretty clear that the more information you have about the derivatives of the function at a point, the better you can fit the nearby behavior of the function at the point, and as you improve your approximation, you can go further away from the point before your approximation starts to get way off. Taylor series is just extending that process as far as possible.
It's still pretty amazing that for some functions you can reproduce the function exactly everywhere by approximating with higher and higher degree polynomials, but of course this is not true for all functions! If your function is not analytic, this can fail, and analyticity is actually a very strong condition.
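That "more derivatives, better fit" picture is easy to see numerically. A small sketch using cos(x) about 0 (an arbitrary choice of function and evaluation point): each extra pair of terms shrinks the error.

```python
import math

# Taylor polynomial of cos(x) about 0: sum of f^(k)(0)/k! * x^k up to `degree`.
# Odd derivatives of cos vanish at 0, so only even powers contribute.
def taylor_cos(x, degree):
    total = 0.0
    for k in range(0, degree + 1, 2):
        total += (-1) ** (k // 2) * x ** k / math.factorial(k)
    return total

x = 1.0
for deg in (2, 4, 8):
    print(deg, abs(taylor_cos(x, deg) - math.cos(x)))
# the error shrinks as the degree grows
```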
[editline]22nd May 2015[/editline]
Here's a fun example: f: R -> R given by f(x) = e^(-1/x) for x > 0 and f(x) = 0 for all other points. This is infinitely differentiable everywhere, but try to Taylor expand at the origin: the function and all its derivatives are zero there, so the Taylor expansion thinks the function vanishes everywhere, even though at any positive x, however small, the function is non-zero!
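That flat function is easy to poke at numerically. A sketch, using a crude finite-difference quotient as a stand-in for the derivative at 0:

```python
import math

# The classic flat function: f(x) = exp(-1/x) for x > 0, and 0 otherwise.
# All derivatives at 0 are 0, so its Taylor series at 0 is identically zero,
# yet f itself is positive for every x > 0.
def f(x):
    return math.exp(-1.0 / x) if x > 0 else 0.0

# symmetric difference quotient as a rough stand-in for f'(0)
h = 1e-3
approx_deriv_at_0 = (f(h) - f(-h)) / (2 * h)
print(approx_deriv_at_0)  # essentially 0 (exp(-1000) underflows)
print(f(0.5))             # exp(-2) ≈ 0.135, clearly nonzero
```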
Thanks Johnny, now I understand the concept better!
So, if I expand the function e^(-1/x) at a point, a=1 for example, should I get the same function if I take derivatives out to infinity?
[editline]23rd May 2015[/editline]
Also, I was thinking about linear algebra and R^(infinity)x(infinity) matrices. Sounds like in this case the matrix just becomes a function of two variables, f(x,y). Is this a Hilbert space? I was looking around the internet and Hilbert space is the closest I got.
[QUOTE=Fourier;47785723]Thanks Johnny, now I understand the concept better!
So, if I expand the function e^(-1/x) at a point, a=1 for example, should I get the same function if I take derivatives out to infinity?[/QUOTE]
Well, no. e^(-1/x) by itself is not analytic or differentiable (or even defined) on all of R. The function I gave has derivatives of all orders everywhere, but its Taylor series does not always converge to the function itself.
[QUOTE=Fourier;47785723]Also, I was thinking about linear algebra and R^(infinity)x(infinity) matrices. Sounds like in this case the matrix just becomes a function of two variables, f(x,y). Is this a Hilbert space? I was looking around the internet and Hilbert space is the closest I got.[/QUOTE]
I think you need a more precise definition of matrices over R^∞² in order to answer questions like that. What do the entries look like? Do you mean to specify the dimensions of the matrix?
Also, some motivation for why you think it's a function of two variables.
If you want to define a Hilbert space, you also need to figure out how to define scalar multiplication, vector addition, and the inner product.
[QUOTE=Gas/spg;47786072]I think you need a more precise definition of matrices over R^∞² in order to answer questions like that. What do the entries look like? Do you mean to specify the dimensions of the matrix?
Also, some motivation for why you think it's a function of two variables.
If you want to define a Hilbert space, you also need to figure out how to define scalar multiplication, vector addition, and the inner product.[/QUOTE]
Well, an R^∞² matrix can be a function of two variables, and this function f(i,j) is bounded.
i lies in the range [α, β] and j lies in the range [γ, δ].
The inner product for this space is just the integral of the product of two functions. I think it's the standard version, just an integral instead of a sum.
I will speak to my professor about this, just for fun.
[editline]24th May 2015[/editline]
[QUOTE=JohnnyMo1;47785924]Well, no. e^(-1/x) by itself is not analytic or differentiable (or even defined) on all of R. The function I gave has derivatives of all orders everywhere, but its Taylor series does not always converge to the function itself.[/QUOTE]
Yeah, calculus prof. said that it doesn't converge for all functions. Sadly :v:.
[QUOTE=Fourier;47792587]Yeah, calculus prof. said that it doesn't converge for all functions. Sadly :v:.[/QUOTE]
I think it's pretty interesting that a function R -> R can have derivatives of all orders and not be analytic, but if a function C -> C is differentiable once everywhere, it is infinitely differentiable and analytic everywhere.
[QUOTE=Fourier;47792587]Well, an R^∞² matrix can be a function of two variables, and this function f(i,j) is bounded.
i lies in the range [α, β] and j lies in the range [γ, δ].
The inner product for this space is just the integral of the product of two functions. I think it's the standard version, just an integral instead of a sum.
I will speak to my professor about this, just for fun.[/QUOTE]
So if I'm understanding you correctly, the points in a square in R² are represented by the indices of the matrix.
This means that there are uncountably many rows and columns.
The problem is, I have a hard time fathoming a matrix with those dimensions.
Perhaps you could write down the matrix that corresponds to f(i,j) = i+j on [0,1]²?
[QUOTE=JohnnyMo1;47792991]I think it's pretty interesting that a function R -> R can have derivatives of all orders and not be analytic, but if a function C -> C is differentiable once everywhere, it is infinitely differentiable and analytic everywhere.[/QUOTE]
Nice, I didn't know that. The complex world is interesting, but we've barely worked in it.
[editline]24th May 2015[/editline]
[QUOTE=Gas/spg;47793298]So if I'm understanding you correctly, the points in a square in R² are represented by the indices of the matrix.
This means that there are uncountably many rows and columns.
The problem is, I have a hard time fathoming a matrix with those dimensions.
Perhaps you could write down the matrix that corresponds to f(i,j) = i+j on [0,1]²?[/QUOTE]
Here, I hope it makes sense.
[IMG]http://i.imgur.com/IXdB4Ta.png[/IMG]
hey i am going off to college next year and i would like to go into a math-heavy major, i'm really interested in it. i didn't do as well as i could have earlier in high school so i think i have some holes in my math knowledge. is there a test out there somewhere i could take that would identify my weaknesses in general high school math?
In that case, I don't see the point in calling them matrices. As far as I know, most of linear algebra only applies to matrices with countably many rows and columns.
With these, matrix multiplication is no longer well defined. Row equivalent matrices can correspond to different functions. They have no trace, no determinant and no eigenvalues.
As for the inner product, I'm assuming you take the double integral over the rectangle. Considering only the integrable functions, this forms an inner product space. I suspect it doesn't form a Hilbert space, however. See the [url=https://en.wikipedia.org/wiki/Inner_product_space#Examples]third example.[/url]
[editline]25th May 2015[/editline]
[QUOTE=tedb;47796766]hey i am going off to college next year and i would like to go into a math-heavy major, i'm really interested in it. i didn't do as well as i could have earlier in high school so i think i have some holes in my math knowledge. is there a test out there somewhere i could take that would identify my weaknesses in general high school math?[/QUOTE]
The math placement tests that universities sometimes use are designed for just that. There's a possibility that your college will administer one if you're going into a math-heavy major.
As for one you can do now,
[url]http://www-math.umd.edu/cgi-bin/placement/index.cgi[/url]
this one seems good. It has a heavy focus on algebra and trigonometry (but not so much on combinatorics and geometry).
If you know what books you'll need, you can start reading those to find the gaps.
[QUOTE=Gas/spg;47796924]In that case, I don't see the point in calling them matrices. As far as I know, most of linear algebra only applies to matrices with countably many rows and columns.
With these, matrix multiplication is no longer well defined. Row equivalent matrices can correspond to different functions. They have no trace, no determinant and no eigenvalues.
As for the inner product, I'm assuming you take the double integral over the rectangle. Considering only the integrable functions, this forms an inner product space. I suspect it doesn't form a Hilbert space, however. See the [url=https://en.wikipedia.org/wiki/Inner_product_space#Examples]third example.[/url]
[/QUOTE]
The trace is easy to define: it's just the sum of the diagonal, i.e. the integral of f(x,x) dx.
But I agree about the determinant and eigenvalues.. the determinant is an infinite operation and can't be computed so easily. Maybe it converges.. maybe.
As for eigenvectors, they surely exist; it's just hard to calculate them, again.
And yes, I already defined multiplication; it's not really hard. A single integral for a single element.
How can eigenvectors surely exist but not eigenvalues?
You've already defined the inner product, whose result is a real number, but not the matrix product, whose result is a matrix.
[QUOTE=Gas/spg;47804372]How can eigenvectors surely exist but not eigenvalues?
You've already defined the inner product, whose result is a real number, but not the matrix product, whose result is a matrix.[/QUOTE]
Sorry, I mixed those two up.
As for matrix-matrix multiplication, I defined it (on paper); it's not really hard.
But I will stop now, because I don't think it is really useful.
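For what it's worth, the definitions floated above (trace as the integral of f(x,x), inner product as a double integral, product as a single integral per element) can be sketched numerically by discretizing the index square. The kernels f(i,j) = i + j and g(i,j) = i*j on [0,1]² are arbitrary choices for illustration, integrated with a midpoint rule:

```python
# "Matrices" indexed by [0,1]x[0,1] as kernels f(i,j); the usual matrix
# operations become integrals, approximated here on an n-point grid.
n = 400
h = 1.0 / n
grid = [(k + 0.5) * h for k in range(n)]  # midpoint rule nodes

def f(i, j): return i + j
def g(i, j): return i * j

# trace(f) = integral of f(x, x) dx; here f(x,x) = 2x, so the exact value is 1
trace_f = sum(f(x, x) for x in grid) * h

# inner product <f, g> = double integral of f*g over the square (exactly 1/3)
inner_fg = sum(f(x, y) * g(x, y) for x in grid for y in grid) * h * h

# matrix product: (f g)(i, j) = integral of f(i, k) * g(k, j) dk
def matprod(i, j):
    return sum(f(i, k) * g(k, j) for k in grid) * h

print(trace_f)            # ≈ 1.0
print(inner_fg)           # ≈ 1/3
print(matprod(1.0, 1.0))  # integral of (1 + k)*k dk = 5/6
```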
Alright I got a math-ish problem at hand, and I'd like to get some help to solve it:
I run a Garrysmod community, which runs a specific gamemode and started off with 2 servers. These servers did really well, and using gametracker.com we noticed that each server was roughly rank #100 on gametracker.
However, we had to expand, and we currently have 5 servers running the same gamemode. This lets us divide the playerbase among more servers, giving a better experience than having 2 completely filled servers constantly. Gametracker, however, doesn't know about this, so the now 70% filled servers get a lower rank because they aren't individually as popular anymore.
What I'd like to do is to combine the servers' ranks into 1 "shared" rank, which means the more servers there are in this combination, the better the rank should be. Is this what people call a weighted average?
It's a bit difficult to explain, and maybe the answer is right in front of my eyes, but I've thought about this all day and I can't figure out any 100% accurate way of measuring it.
You can't do a computation on the ranks directly, you need to know the playercounts of all the servers (not just your own). If you do, add up the players from all your servers, compare them to the players of the other servers and rank it accordingly.
It's totally subjective... There is no '100% accurate' measurement.
The simplest way is to just count up the total number of players, but are 10 servers with a handful of people each really better than one server that's packed?
Maths can help you calculate a result, but it can't tell you what result you want to be looking for!
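One way to sketch the total-players approach suggested above (every server name and player count here is invented for illustration):

```python
# Sum the players across your own servers, then rank that total against
# every other server's individual count. Rank 1 = most players.
servers = {"other_a": 60, "other_b": 45, "other_c": 90,
           "mine_1": 30, "mine_2": 28, "mine_3": 25,
           "mine_4": 22, "mine_5": 20}

mine = {name for name in servers if name.startswith("mine_")}
my_total = sum(servers[s] for s in mine)
others = [count for name, count in servers.items() if name not in mine]

shared_rank = 1 + sum(1 for c in others if c > my_total)
print(my_total, shared_rank)
```

As the posts above point out, this only works if you know everyone's player counts, and whether "total players" is the right metric at all is a judgment call, not a calculation.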
I need a "Why?" for complex functions and complex Taylor series.
Where can I find complex functions in nature?
I am a game developer; where can complex numbers and functions help me?
I want to do robotics; are complex numbers my friends?
[QUOTE=Fourier;47831252]I need a "Why?" for complex functions and complex Taylor series.
Where can I find complex functions in nature?[/QUOTE]
The wave function of a particle is a complex function. Electrodynamics has a U(1) gauge symmetry, meaning it's symmetric under a choice of a complex phase. Quantum field theory has contour integrals eeeeeverywhere.
[QUOTE=Fourier;47831252]I need a "Why?" for complex functions and complex Taylor series.
[/QUOTE]
Also, contour integrals are like[URL="http://math.stackexchange.com/questions/tagged/contour-integration?sort=votes&pageSize=15"] the WD40 of real integrals[/URL].
Another one: Fourier transforms are extremely important, and they often result in complex functions. MRI scanners measure k-space, i.e. the Fourier transform of the spatial distribution of mass in the patient, "directly" (in a sense; this is strongly simplified). You then Fourier transform that measurement to go back to real space, but that's still a complex function. The image you associate with an MRI is often the modulus of that complex function.
IMO a "why" shouldn't necessarily be motivated directly by nature though, sometimes it's just a goddamn useful tool.
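A tiny pure-Python illustration of that point: the discrete Fourier transform of a real signal is complex, and what you would display is its modulus. The 8-sample cosine used here is an arbitrary test signal:

```python
import cmath
import math

# Naive DFT: X[f] = sum_t x[t] * exp(-2*pi*i*f*t/n)
def dft(signal):
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                for t in range(n))
            for f in range(n)]

# real cosine at frequency bin 2, sampled at 8 points
x = [math.cos(2 * math.pi * 2 * t / 8) for t in range(8)]
spectrum = dft(x)           # complex values
magnitudes = [abs(c) for c in spectrum]
print(magnitudes)           # peaks at bins 2 and 6, near-zero elsewhere
```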
Thanks guys!
I know I should learn math just for the sake of it, but sometimes I get depressed because there is no point / no connection.
You know that 'aha!' moment? I miss it. But they do happen from time to time, and that's what keeps me going.
[QUOTE=Cosa8888;47586163]I think I'm just fucking around and this is probably meaningless, but it's pretty weird considering that if you can set a value for it you can essentially calculate the square root of two:
Let's start with this equation:
[IMG]http://latex.codecogs.com/gif.latex?x%3D1-x[/IMG]
The solution is x=1/2, right? Then:
[IMG]http://latex.codecogs.com/gif.latex?1/2%3Dx%3D1-x[/IMG]
Now, since x=1-x, what if we replace the x on the left side of the equation by the left side itself?
[IMG]http://latex.codecogs.com/gif.latex?x%3D1-(1-x)[/IMG]
Now, what if we do it an infinite number of times?
[IMG]http://latex.codecogs.com/gif.latex?x%3D1-(1-(1-...\Rightarrow 1/2%3D1-(1-(1-...[/IMG]
Ok, here's where the mistake or nonsense is, but what if we calculate the square root of x?
[IMG]http://latex.codecogs.com/gif.latex?\sqrt{x}%3D\sqrt{1-x}\Rightarrow \sqrt{x}%3D\sqrt{1-\sqrt{1-...}}[/IMG]
Then:
[IMG]http://latex.codecogs.com/gif.latex?\sqrt{1/2}%3D\sqrt{1-\sqrt{1-...}}[/IMG]
It looks meaningless, and probably it is, but if the left side is a rational number, then this expression:
[IMG]http://latex.codecogs.com/gif.latex?\sqrt{2}%3D\frac{1}{\sqrt{1-\sqrt{1-...}}}[/IMG]
Would make sense.
[/QUOTE]
Take x = sqrt( 1 - sqrt( 1 - sqrt( 1 - sqrt( 1 - ... ) ) ) )
Well, this goes on an infinite number of times, meaning that if it converges, we can do the same operation again and still find the same number, as follows:
sqrt( 1 - x ) = sqrt( 1 - sqrt( 1 - sqrt( 1 - sqrt( 1 - ... ) ) ) ) = x
So we have 1 - x = x^2, giving x^2 + x - 1 = 0. The interesting result is that x = (sqrt(5) - 1)/2, the reciprocal of the golden ratio. Simple maths, but definitely not sqrt( 1/2 ). If I am not mistaken, the mistake lies in substituting x = 1 - x an infinite number of times, because you still need to subtract the 1/2 at the end of it, something that is not being done in this case.
[QUOTE=dingusnin;47847946]Take x = sqrt( 1 - sqrt( 1 - sqrt( 1 - sqrt( 1 - ... ) ) ) )
Well, this goes on an infinite number of times, meaning that if it converges, we can do the same operation again and still find the same number, as follows:
sqrt( 1 - x ) = sqrt( 1 - sqrt( 1 - sqrt( 1 - sqrt( 1 - ... ) ) ) ) = x
So we have 1 - x = x^2, giving x^2 + x - 1 = 0. The interesting result is that x = (sqrt(5) - 1)/2, the reciprocal of the golden ratio. Simple maths, but definitely not sqrt( 1/2 ). If I am not mistaken, the mistake lies in substituting x = 1 - x an infinite number of times, because you still need to subtract the 1/2 at the end of it, something that is not being done in this case.[/QUOTE]
Actually, both are mistakes:
He square rooted it to get sqrt(x) = sqrt(1-x), but then when you substitute again, you need to substitute for x (and not for sqrt(x) as he did), to get sqrt(x) = sqrt(1 - (1 - (1 - ... ; squaring this gives you the original series (which is pretty obviously going to happen).
What you've done is ignore the square root on the LHS, so you've written x = sqrt(1-x), which then does give you x = sqrt(1 - sqrt(1 - ... , but that has a different solution (the one you've found).
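For what it's worth, the nested radical x = sqrt(1 - sqrt(1 - ...)) can be checked numerically by iterating x -> sqrt(1 - x); it settles on the positive root of x^2 + x - 1 = 0, i.e. (sqrt(5) - 1)/2, not sqrt(1/2):

```python
import math

# Iterate the map x -> sqrt(1 - x); the fixed point satisfies x^2 + x - 1 = 0.
# Its positive root is (sqrt(5) - 1)/2 ≈ 0.618, the reciprocal of the
# golden ratio. Convergence is linear (the map contracts near the fixed point).
x = 0.5  # arbitrary starting value in (0, 1)
for _ in range(200):
    x = math.sqrt(1 - x)

print(x)
print((math.sqrt(5) - 1) / 2)  # same number
```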
I'm studying up on my algebra to start learning some algebraic geometry and I found this bit about polynomial rings in one variable:
"However, in general, X and its powers, X^k, are treated as formal symbols, not as elements of the field K or functions over it."
Why? Why can't we think of a polynomial as being a function from a point in affine n-space with components (a_0, a_1, ... , a_n) to a function from the field to itself given by x |-> a_0 + ... + a_n*x^n? As long as it's a finite degree polynomial I can't see what's being lost by thinking about it this way.
[QUOTE=JohnnyMo1;47855717]I'm studying up on my algebra to start learning some algebraic geometry and I found this bit about polynomial rings in one variable:
"However, in general, X and its powers, X^k, are treated as formal symbols, not as elements of the field K or functions over it."
Why? Why can't we think of a polynomial as being a function from a point in affine n-space with components (a_0, a_1, ... , a_n) to a function from the field to itself given by x |-> a_0 + ... + a_n*x^n? As long as it's a finite degree polynomial I can't see what's being lost by thinking about it this way.[/QUOTE]
The fewer assumptions you can make, the better. (Bear in mind I like Category Theory)
You only make it more restrictive when treating it as a function. You can always do so [i]when useful[/i] by considering the evaluation homomorphism, but if you keep it purely abstract you're free to work with it, safe in the knowledge that you haven't made any extra assumptions (like 'X is an element of our field').
I think this is particularly important in Galois Theory (which I've done more of than Algebraic Geometry) where you consider [i]new[/i] elements (not in your field) to put in place of X.
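There is also a concrete way to see what the formal-symbol view keeps track of: over a finite field, two different polynomials can define the same function, so the polynomial-to-function map loses information. A small sketch over F_2 (the helper `eval_poly` and the coefficient-list representation are my own choices):

```python
# Over F_2 = {0, 1}, the polynomial X^2 + X evaluates to 0 at every element,
# yet it is not the zero polynomial. As formal symbols they are distinct;
# as functions on F_2 they are identical.
F2 = [0, 1]

def eval_poly(coeffs, x, p=2):
    # coeffs[k] is the coefficient of X^k; arithmetic is mod p
    return sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p

x2_plus_x = [0, 1, 1]  # X^2 + X
zero_poly = [0]

for a in F2:
    print(a, eval_poly(x2_plus_x, a), eval_poly(zero_poly, a))
# both evaluate to 0 everywhere on F_2, but the coefficient lists differ
```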
I'm currently doing Pre-Calculus 12 and struggling like a motherfucker with it. It's been 6 years since I graduated high school, and I'm going to go to university as soon as I can, but this math stands between me and that.
I've seriously never found anything as difficult as this type of math.
I find pretty much everything pre-calculus to be kind of a random assortment of shit you ought to know, so it's tough to give people general guidance on what to focus on, but if you ask questions about specific topics, we should be able to help.
Devil's mathematicians at your disposal.