[QUOTE=proboardslol;47417506]Hey all. I dropped my calc class because of some things that happened with my transfer agreements, so I have to retake it next semester. I have the entire summer off to work, and plenty of time to study. What's the best book I can use to teach myself calculus, so that the class is easier when I get back next semester?[/QUOTE]
pick up a copy of Schaum's Mathematical Handbook while you're at it, it'll make homework much easier
[t]http://ecx.images-amazon.com/images/I/510x-bjdt1L._SY344_BO1,204,203,200_.jpg[/t]
i didn't get the purpose of this book until after i got done with my math courses, but it's basically all the indexes out of my 2 math books
[QUOTE=Fourier;47429801]Hey, so why is it worth learning Geometric geometry and why is it better than Linear algebra?
By the way, doing mathematical optimization, stuff is hard.[/QUOTE]
...geometric geometry?
Hey guys, quick question. In my electromagnetism and SR class, my professor has written something I can't wrap my head around. Can anyone figure out how he went from (194) to (195) in the following:
[img]http://gyazo.com/5ed546ebebc6b986b45fe6ea5cf82aa7.png[/img]
I'm a pure mathematician at heart and all this hand-wavey physicsy argumentation doesn't sit well with me. I understand that there is more than likely a solid mathematical answer for what he did, but I've tried everything I can think of (expanding \sqrt{1+x} at infinity). This guy has made a lot of mistakes in his lecture notes, so if it's wrong I guess I wouldn't be surprised.
I don't have any scrap paper lying around right now to check this explicitly, but if you multiply both the numerator and denominator by the denominator you will wind up with a H2H1/l^2 term in the numerator.
[QUOTE=JohnnyMo1;47432409]...geometric geometry?[/QUOTE]
sorry, I meant geometric algebra :v:
[QUOTE=sltungle;47441397]I don't have any scrap paper lying around right now to check this explicitly, but if you multiply both the numerator and denominator by the denominator you will wind up with a H2H1/l^2 term in the numerator.[/QUOTE]
Thanks for the reply. I tried that but it didn't really lead anywhere. Then I realised that by rationalising the denominator (by multiplying the numerator and denominator by the denominator's 'conjugate'), the denominator becomes 1 and I get a much more simplified expression in the numerator which gives an answer roughly like his. Thanks anyway though!
I've realized what is most likely my problem with the programming thing I was working on for Unity.
I was trying to smoothly transition one sine wave into another, with a lerping shift in frequency. So, from x = 0 to x = 10, the frequency might change from 0.25 to 1.0; sin(x*freq).
This resulted in strange super squishing of the wave, looking like a wave with a frequency higher than 1.
Am I correct that this is a Doppler effect, that my wave squishes together as one frequency approaches the next? This is a very undesirable accident. Does anyone have ideas how I might more properly transition one frequency to another?
[QUOTE=bitches;47449020]I've realized what is most likely my problem with the programming thing I was working on for Unity.
I was trying to smoothly transition one sine wave into another, with a lerping shift in frequency. So, from x = 0 to x = 10, the frequency might change from 0.25 to 1.0; sin(x*freq).
This resulted in strange super squishing of the wave, looking like a wave with a frequency higher than 1.
Am I correct that this is a Doppler effect, that my wave squishes together as one frequency approaches the next? This is a very undesirable accident. Does anyone have ideas how I might more properly transition one frequency to another?[/QUOTE]
You have sin(2*pi*v*t), where v is frequency?
Well, it is completely normal for it to be jumping.
Look at this code (at the bottom); it might help you. You need to change the phase so the wave is not broken.
[url]http://answers.unity3d.com/questions/540309/math-changing-the-frequency-of-a-sine-wave-in-real.html[/url]
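Something like this, a Python sketch of the same phase-accumulation idea as in that link (the step size and frequencies here are just example values, not anything from your Unity code):

```python
import math

def sweep(f_start, f_end, duration, dt=0.01):
    """Sample a sine wave whose frequency lerps from f_start to f_end.

    Phase is accumulated each step, so the wave stays continuous
    instead of squishing when the frequency changes.
    """
    n = int(round(duration / dt))
    samples = []
    phase = 0.0
    for i in range(n):
        frac = i / n                              # lerp parameter in [0, 1]
        freq = f_start + (f_end - f_start) * frac
        samples.append(math.sin(phase))
        phase += 2 * math.pi * freq * dt          # advance by the current frequency
    return samples

wave = sweep(0.25, 1.0, 10.0)
```

The key point is that sin() is only ever evaluated at the accumulated phase, never at x*freq directly.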
[QUOTE=JohnnyMo1;47423204]Weird, I just picked up Griffiths again yesterday, glanced through chapter 1, and thought, "Hey, this is a good vector calc review! I should work through this chapter."
But yes, Griffiths is really good for that[/QUOTE]
-snip-
Yes but that is copyrighted so I have snipped your link. Don't go sharing copyrighted stuff on FP.
[QUOTE=Fourier;47449237]You have sin(2*pi*v*t), where v is frequency?
Well, it is completely normal for it to be jumping.
Look at this code (at the bottom) it might help you. You need to change phase so the wave is not broken.
[url]http://answers.unity3d.com/questions/540309/math-changing-the-frequency-of-a-sine-wave-in-real.html[/url][/QUOTE]
But how can I determine the phase without computing every value before the current position (accumulating phase change like that example does)? I need to use this in a vertex shader, where all I'll have are positions to be changing the heights of.
[editline]3rd April 2015[/editline]
there surely must be a way to blend two waves by a percent at x time value without that method of continuous phase accumulation?
[editline]3rd April 2015[/editline]
blending more directly by lerping between finished sine values looks [I]interesting[/I]
[t]http://foxcock.me/web/images/zscreen/2015_04/Screenshot-2015-04-03_11.38.41.png[/t]
not ideal, though; a single sine wave stretched by different amounts would be much better, so it doesn't have that strange stair-step look
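Actually, one stateless way to get this occurred to me: if the frequency lerps linearly from f0 to f1 over a span of length L, the accumulated phase has a closed form (it's just the integral of the lerped frequency), so a vertex shader can evaluate it from x alone with no accumulation. A sketch in Python (the function and parameter names here are mine, not from my actual shader):

```python
import math

def chirp(x, f0, f1, length):
    """sin of the exact accumulated phase of a linearly swept frequency.

    phase(x) = 2*pi * integral from 0 to x of (f0 + (f1 - f0)*t/length) dt
             = 2*pi * (f0*x + (f1 - f0)*x^2 / (2*length))
    """
    phase = 2 * math.pi * (f0 * x + (f1 - f0) * x * x / (2 * length))
    return math.sin(phase)
```

Since it only needs x, it should translate directly to shader code, and the wave stays continuous because the phase function is smooth.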
[QUOTE=JohnnyMo1;47449661]Yes but that is copyrighted so I have snipped your link. Don't go sharing copyrighted stuff on FP.[/QUOTE]
Ah shit, my apologies - didn't realise. Glad that it's the right thing though.
[t]http://i.imgur.com/deX9RZQ.png?1[/t]
first-grader-esque stuff, but it's kinda cool to fiddle around with symbols like that
What's a secretly rude thing to say?
"See you in N days!"
[QUOTE=Saturn V;47468927][t]http://i.imgur.com/deX9RZQ.png?1[/t]
first grader esque stuff but its kinda cool to fiddle around with symbols like that[/QUOTE]
So many divisions by zero it's not even funny anymore. I'd fail you on every exam if I saw something like that.
Well, I actually did fail students on their assignments when I graded them for a semester.
[QUOTE=Killuah;47488288]So many divisions by zero it's not even funny anymore. I'd fail you on every exam if I saw something like that.
Well, I actually did fail students on their assignments when I graded them for a semester.[/QUOTE]
I was laughing as I read it.
Hello guys.
Can anyone give me a real-life example of multi-dimensional linear algebra stuff?
I know 3D inside and out, but what about more dimensions, like 4, 5, 6, N?
What's the use of eigenvectors in that case? Linear system solving?
Can anyone explain?
[QUOTE=Fourier;47528437]Hello guys.
Can anyone give me a real-life example of multi-dimensional linear algebra stuff?
I know 3D inside and out, but what about more dimensions, like 4, 5, 6, N?
What's the use of eigenvectors in that case? Linear system solving?
Can anyone explain?[/QUOTE]
You're not Fourier! He'd know that!
General relativity is all about 4-dimensional manifolds, where you have 4-dimensional tangent vector spaces at each point.
Phase space in classical mechanics is usually pretty big: for point particles it's twice the dimension of space times the number of particles (e.g. 3 dimensions for possible positions in 3D space and another 3 for possible momenta means a 6-dimensional phase space, just for a single point particle), and even bigger for rigid bodies because then you have rotations.
Quantum mechanics is all about linear algebra in infinitely many dimensions! Square integrable functions form a normed vector space of infinitely many dimensions.
[editline]14th April 2015[/editline]
For the quantum case, all observable quantities are operators on this infinite-dimensional vector space. The eigenvalues of the operator are the possible values a measurement of that observable will return. The eigenvectors are determinate states: states which will return a definite value of the observable you're measuring, the associated eigenvalue.
Wait, what the hell was I saying yesterday? Phase space is not a vector space. However, Hamiltonian mechanics does deal with functions to and from the tangent space of points in phase space, which [I]will[/I] be a vector space like the one I described.
[editline]15th April 2015[/editline]
Ooh, ooh, I've got another good one: Lie algebras. Lie groups encode information about the smooth symmetries of manifolds. The Lie algebra of a Lie group is a vector space (isomorphic to the tangent space at the identity element of the Lie group) which reflects a lot of the structure of the Lie group but is simpler to work with, being a vector space and all. The dimensionality of the Lie algebra is limited only by the number of classes of smooth symmetries of the manifold in question.
[editline]15th April 2015[/editline]
Here's a pretty simple one that should demonstrate both that vectors can be abstract (not obviously little arrows) and that vector spaces can be dimensionally big: polynomials of degree at most n form an (n+1)-dimensional vector space under the usual addition. This is one you can probably just pick up a pen and prove the vector space axioms for.
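To make that concrete: a polynomial of degree at most n is determined by its n+1 coefficients, and polynomial addition and scalar multiplication are just the usual vector operations on those coefficient lists. A quick sketch (the helper names here are mine):

```python
# Represent a polynomial a0 + a1*x + a2*x^2 + ... by its coefficient list [a0, a1, a2, ...]
def add(p, q):
    """Polynomial addition = componentwise addition of coefficients."""
    return [a + b for a, b in zip(p, q)]

def scale(c, p):
    """Scalar multiplication = multiplying every coefficient by c."""
    return [c * a for a in p]

# (1 + 2x + 3x^2) + (4 + 5x + 6x^2) = 5 + 7x + 9x^2
p = [1, 2, 3]
q = [4, 5, 6]
```

The vector space axioms (commutativity, distributivity, and so on) all reduce to the corresponding facts about the coefficients.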
Heh, I know that polynomials (as functions) form a vector space with all the vector space properties :v:.
Cool stuff about quantum mechanics though! Does this mean a qubit has infinitely many dimensions, and one measurement is actually one eigenvector? Or is this not about quantum computing?
Also I looked into Lie algebra many times but I find it boring, I don't know why.
I figured you might know the polynomial one, it's a pretty common early example in a decent linear algebra course.
I don't know much about qubits or quantum computing. I was talking about regular old quantum mechanics. So e.g. measuring a particle's position, momentum, angular momentum, or any other observable quantity, will cause the system to collapse from its current state to an eigenstate (that is, an eigenvector) of the operator associated with that observable quantity with some probability, and the measurement will return the eigenvalue for the eigenstate. So maybe I measure the momentum of a particle. The particle's wave function collapses into the state with, say, 2 whatever-the-fuck-units of momentum and my measurement shows 2 whatever-the-fuck-units.
I find Lie algebras a bit boring too tbh. I think Lie groups and smooth manifolds by themselves are much more interesting.
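A toy finite-dimensional version of the measurement story, if it helps (this is a 2x2 Hermitian matrix standing in for an observable, not the infinite-dimensional operators of real QM): the eigenvalues are the possible measurement outcomes, and the eigenvectors are the determinate states.

```python
import math

# The Pauli x matrix as a toy "observable"
A = [[0, 1],
     [1, 0]]

def matvec(M, v):
    """Plain matrix-vector multiplication."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# Its eigenpairs, found by hand: eigenvalue +1 with (1,1)/sqrt(2),
# and eigenvalue -1 with (1,-1)/sqrt(2)
s = 1 / math.sqrt(2)
plus, minus = [s, s], [s, -s]
```

Checking A·v = λ·v for each confirms these are the determinate states: measuring this observable on `plus` always returns +1, on `minus` always -1.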
[QUOTE=JohnnyMo1;47532035]I figured you might know the polynomial one, it's a pretty common early example in a decent linear algebra course.
I don't know much about qubits or quantum computing. I was talking about regular old quantum mechanics. So e.g. measuring a particle's position, momentum, angular momentum, or any other observable quantity, will cause the system to collapse from its current state to an eigenstate (that is, an eigenvector) of the operator associated with that observable quantity with some probability, and the measurement will return the eigenvalue for the eigenstate. So maybe I measure the momentum of a particle. The particle's wave function collapses into the state with, say, 2 whatever-the-fuck-units of momentum and my measurement shows 2 whatever-the-fuck-units.
I find Lie algebras a bit boring too tbh. I think Lie groups and smooth manifolds by themselves are much more interesting.[/QUOTE]
Oh, I'm not really an expert in quantum anything, but I can kind of imagine eigenstates. I find eigenvectors quite simple.
-Complete snip: wrong thread entirely.-
Dear math thread,
What [i]exactly[/i] does dx mean in "integral f(x) dx" ? It doesn't seem to be just a bit of notation to show what you're integrating over, because I've seen people do some pretty weird stuff to it. Simplest example, multiplying dx by dy/dx to change the variable of integration. If it were just notation, it would seem as if you couldn't do that, because dx isn't "actually there". But it acts like it is.
What kind of object is "dx"? Can it exist "in the wild", i.e. outside of any integral or derivative? Can an integral exist with no "dx" in it, and would that make it an integral with no variable to integrate over?
The simplest explanation I can remember from school is this:
While measuring area under a curve with an integral, what you are basically doing is summing infinitesimally small rectangles with a height of f(x) and a width of some infinitesimally small quantity. So the integral basically means "the sum of the areas in the rectangles with height f(x) and dx", which is the sum of their products.
So basically you are doing f(x1)·dx1 + f(x2)·dx2 + ... + f(xn)·dxn, and because all the dx are infinitesimally small, you can call them all dx, so it's the sum f(x1)·dx + f(x2)·dx + ... + f(xn)·dx.
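That Riemann-sum picture is easy to make concrete numerically; here's a sketch approximating the integral of x² on [0, 1] (whose exact value is 1/3) by summing f(xi)·dx for a small dx:

```python
def riemann(f, a, b, n):
    """Left Riemann sum: split [a, b] into n strips of width dx and
    add up the rectangle areas f(x_i) * dx."""
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(n))

approx = riemann(lambda x: x * x, 0.0, 1.0, 100000)
# gets closer and closer to 1/3 as dx shrinks
```

The integral is the limit of these sums as dx goes to zero.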
I got the impression that he's looking for something a bit deeper than that.
[URL="http://math.stackexchange.com/questions/21199/is-frac-textrmdy-textrmdx-not-a-ratio"]The mathematics stack exchange [/URL] has nice related questions & answers, also [URL="http://math.stackexchange.com/questions/23902/what-is-the-practical-difference-between-a-differential-and-a-derivative"]this one[/URL].
There are a few ways to think of what's in the integrand (e.g. measure theoretic, differential forms) but I think the way you'll see most is a bit more informal. d(something) outside an integrand can have meaning in relation to other infinitesimal things, but to get real finite answers you'll need to integrate or differentiate. It's kind of an indicator that you're working with a relation which holds exactly, but only locally. Here's an example:
We have a function of two variables, f. A relation you'll often see in e.g. calculus of variations is
[img]http://i.imgur.com/DfFR4Vx.jpg[/img]
We can interpret it as saying "if you go from a point (x,y) to a nearby point (x+dx, y+dy), the value of f changes by dx times the slope in the x direction plus dy times the slope in the y direction." This is only approximately true for any smooth two-variable function f, but it is exactly true for a plane, such as the tangent plane at the point (x,y). So d(something) is a sort of shorthand for "a value so small we can take first order approximations (or approximations of any finite order) as being exactly true." These expressions mean something about a smooth function locally, and integration then lets us turn them into useful statements globally. Leibniz notation is still very popular because it has the uncanny ability to make not-so-obvious statements obvious: df/dx is not really a ratio, but rules like the chain rule show why it can be treated like one in certain circumstances, and Leibniz notation suggests that.
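You can see the "exactly true to first order" claim numerically; a sketch with a function I picked arbitrarily (f(x,y) = sin(x)·eʸ, with its partial derivatives written out by hand):

```python
import math

def f(x, y):
    return math.sin(x) * math.exp(y)

def fx(x, y):                      # partial derivative of f with respect to x
    return math.cos(x) * math.exp(y)

def fy(x, y):                      # partial derivative of f with respect to y
    return math.sin(x) * math.exp(y)

x, y = 1.0, 0.5
dx, dy = 1e-4, 1e-4

actual = f(x + dx, y + dy) - f(x, y)   # the true change in f
linear = fx(x, y) * dx + fy(x, y) * dy  # the df = fx*dx + fy*dy prediction
```

The discrepancy between `actual` and `linear` is second order in the step size, so it vanishes much faster than the change itself as dx, dy shrink, which is what makes the first-order relation exact "in the limit".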
Oh ok. That makes sense.
I accidentally drew the most aesthetically pleasing '2' in my entire life today and there's no way it's ever happening again.
[IMG]https://fbcdn-sphotos-c-a.akamaihd.net/hphotos-ak-xaf1/v/t1.0-9/11148441_10205789339366767_1512175340217601480_n.jpg?oh=87f0c788d7076997a88a37e2eda9441b&oe=55D86E96&__gda__=1440599842_7ee68ea38fca8794cd852ed39ec3dbbb[/IMG]