• Mathematician Chat v. 3.999...
    1,232 replies, posted
What does dt, dx, dy, df mean? Those are differentials. I know what Δx, Δy mean, it's just "finite change". Is dt an "infinite change"? It's bothering me because of total derivatives, which is <grad u, dv>, where:
- u is an R^N -> R function
- dv is the vector [dx1, dx2, dx3, ..., dxn]
So what are dx1, dx2? Are those just symbols to give meaning to the total derivative, just like the basis vectors i, j, k?
[QUOTE=Fourier;49221750]What does dt, dx, dy, df mean? Those are differentials. I know what Δx, Δy mean, it's just "finite change". Is dt an "infinite change"?[/QUOTE] More like infinitesimal, i.e. as small as you want but not zero; infinitely small (although I find that terminology not great). Perhaps review the definition of a tangent line to a curve. You will see they are necessary to find the slope at a single point; otherwise you get the average slope of some finite interval. I struggled with epsilon-delta proofs and infinitesimals for quite a while myself. [QUOTE=Fourier;49221750]What does dt, dx, dy, df mean? Those are differentials. It's bothering me because of total derivatives, which is <grad u, dv>, where: - u is an R^N -> R function - dv is the vector [dx1, dx2, ..., dxn]. So what are dx1, dx2? Are those just symbols to give meaning to the total derivative, just like the basis vectors i, j, k?[/QUOTE] For some multivariate function, the dx_i would then be an infinitesimal change in the direction associated with the coordinate x_i (I think?? I never had multivariable calculus explicitly). There's also some really deep and elegant way of explaining it with differential geometry, as JohnnyMo1 will probably point out in a minute :v: But I'm not comfortable with those notions myself (yet, hopefully). I think my explanation is still the "Leibniz" way, and it might not be sufficient for you. Also see [url=http://math.blogoverflow.com/2014/11/03/more-than-infinitesimal-what-is-dx/]1[/url], [url=http://mathoverflow.net/questions/73492/how-misleading-is-it-to-regard-fracdydx-as-a-fraction]2[/url], [url=http://mathoverflow.net/questions/10574/how-do-i-make-the-conceptual-transition-from-multivariable-calculus-to-different]3[/url] and a whole lot more. You might get more confused too. [B]Edit:[/B] woah, that first link is eye-opening, Carroll was a bit brief IMO...
This is an excellent question; it made me think harder about this than I have in years, and I think I understand it a little better for your having asked. [QUOTE=Number-41;49222735]More like infinitesimal, i.e. as small as you want but not zero; infinitely small (although I find that terminology not great). Perhaps review the definition of a tangent line to a curve. You will see they are necessary to find the slope at a single point; otherwise you get the average slope of some finite interval. I struggled with epsilon-delta proofs and infinitesimals for quite a while myself. For some multivariate function, the dx_i would then be an infinitesimal change in the direction associated with the coordinate x_i (I think?? I never had multivariable calculus explicitly). There's also some really deep and elegant way of explaining it with differential geometry, as JohnnyMo1 will probably point out in a minute :v: But I'm not comfortable with those notions myself (yet, hopefully). I think my explanation is still the "Leibniz" way, and it might not be sufficient for you. Also see [url=http://math.blogoverflow.com/2014/11/03/more-than-infinitesimal-what-is-dx/]1[/url], [url=http://mathoverflow.net/questions/73492/how-misleading-is-it-to-regard-fracdydx-as-a-fraction]2[/url], [url=http://mathoverflow.net/questions/10574/how-do-i-make-the-conceptual-transition-from-multivariable-calculus-to-different]3[/url] and a whole lot more. You might get more confused too. [B]Edit:[/B] woah, that first link is eye-opening, Carroll was a bit brief IMO...[/QUOTE] I really like that first link, never seen it before. I read Stack Exchange all the time, but I didn't know about that community blog. I'll have to start reading it. Thinking about "infinitesimal change" works. You start with single-variable calculus, where the derivative is a limit as the change between two points on a function gets as small as you can make it, so it's obvious why that works. It's built out of such ideas.
I'm gonna aim for the more sophisticated stuff like in the first link that Number-41 posted and try to make it intuitive. So I think the "sophisticated insight" that helped me most is this: any time you're talking about "infinitesimal" anything, you're really talking about vectors. The sophisticated way to think of df, or dx, or dy, etc. is as a covector field. The Spivak quote in that first link touches on it: "Eventually it was realized that the closest one can come to describing an infinitely small change is to describe a direction in which this change is supposed to occur, i.e., a tangent vector." If someone asked you to draw a 1 unit change in the x-direction, you could just draw a little line segment from 0 to 1 on the x-axis, but what if they want you to show an infinitesimal change in the x-direction? It'll be smaller than any finite amount. It makes sense to use a vector, you just have to say, "Well it goes this way." Now if someone wants an infinitesimal quantity that's twice as big, you can use a vector that has twice the magnitude. That's a bit rough, but it's the beginning of the gist. This kind of thing pops up all over in math and physics. It's all facets of "linearization." If you look very closely at a spot on the graph of a smooth function, it looks pretty close to a straight line. The derivative contains information about the best linear approximation to the function at the point. It's no accident that "linear algebra" deals extensively with vectors. One place this idea comes up in physics is symmetry. We might say a circle looks the same if we rotate it by 37 degrees. Physicists like to talk about how a system behaves under an infinitesimal symmetry, like a rotation by a very very small angle. Mathematicians understand this idea differently. 
Symmetries are naturally described by a smooth object called a Lie group, and what a physicist might be content to call an infinitesimal transformation, a mathematician knows is a statement about the Lie [I]algebra[/I]. Every Lie group has an associated Lie algebra, and it's a vector space! Infinitesimal symmetry transformations are described by vectors. Hope that wasn't too much of a digression. Here's hopefully an illuminating example (and a chance to doodle on my new tablet). Imagine we attach a vector in R^2 to every point along the real line (ignoring vectors which are completely horizontal for the moment), like so: [IMG]http://i64.tinypic.com/14w839y.png[/IMG] We're gonna assume that the choice of vector varies "smoothly" (deftly ignoring how thorny that is). The vectors determine a function like the one graphed in black up there, call it y(x), up to addition of a constant. This shouldn't seem too surprising if you've seen integration. The derivative of a function is almost enough info to reconstruct the function by integration, again up to addition of a constant. So what are dx and dy? They're essentially like a basis in which we represent the vectors we used to reconstruct that function. Say we have an expression for the derivative of the function, like dy/dx = cos(x). We know what function has derivative equal to cos(x): it's sin(x) + c. Let's think of this instead as being dy = cos(x) dx. Pick some values for x. At x = 0, this relation is dy = dx, which says "the tangent vector points as much in the x direction as in the y." So the tangent vector is of the form <1,1> (I think as long as you're consistent with the normalization of your vectors, it doesn't matter which vector you pick. It could be <2,2> or <-1000,-1000>, but I'm not 100% on that right now). At x = pi/2, the tangent vector will be along <1,0>, at x = pi, it will be along <1,-1>, and so on and so forth. If you plot the vectors along R like I did above, it clearly matches the sine function.
I think I might be veering a little far back towards the elementary, so what I'm getting at is that symbols like dy and dx are giving you information about the direction that the tangent vector to a function points in. And this is what differential geometry is all about: smooth spaces are what we might think of as a surface or a volume or whatever, but the important fact is that to be smooth, they must come equipped with a vector space attached to every point. A "smooth" choice of one vector at each point (or covector, though the distinction is subtle and sometimes they're basically interchangeable) can give you information about smooth functions on the space.
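To make the dy = cos(x) dx reading above concrete, here's a tiny Python sketch (my own toy code, not from any text) that prints the tangent direction <1, cos(x)> at the same sample points:

```python
import math

# Reading dy = cos(x) dx as a tangent direction: at each x, the tangent
# vector goes 1 unit in x for every cos(x) units in y, i.e. <1, cos(x)>.
def tangent_vector(x):
    return (1.0, math.cos(x))

for x in (0.0, math.pi / 2, math.pi):
    vx, vy = tangent_vector(x)
    print(f"x = {x:.3f}: tangent along <{vx:.0f}, {vy:.0f}>")
```

Drawing these little arrows along the x-axis reproduces the vector field in the picture, and integrating them recovers sin(x) up to a constant.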
Hey guys, that is a sweet explanation! So let me check if I got it right: the differentiation/derivative in the <1,0> direction/path is the vector <1,0> * (df/dx). Then the derivative in the direction <a,b> must be a*<1,0>*(df/dx) + b*<0,1>*(df/dy), correct? Or should the vector <1,1> be normalized first? I will come back later and study harder, I need to go to work now.
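For what it's worth, the linearity in that question can be checked numerically. A quick sketch (the function f(x,y) = x^2 y and the sample point are my own toy choices, not from the thread) comparing a finite-difference derivative along <a,b> against a*(df/dx) + b*(df/dy):

```python
def f(x, y):
    return x * x * y  # arbitrary smooth test function (my own choice)

def deriv_along(f, x, y, a, b, h=1e-6):
    # finite-difference derivative of f along the (unnormalized) vector <a, b>
    return (f(x + h * a, y + h * b) - f(x, y)) / h

x, y, a, b = 1.0, 2.0, 3.0, 4.0
fx = deriv_along(f, x, y, 1, 0)   # ~ df/dx = 2xy
fy = deriv_along(f, x, y, 0, 1)   # ~ df/dy = x^2
print(abs(deriv_along(f, x, y, a, b) - (a * fx + b * fy)) < 1e-3)  # True
```

Note the unnormalized version scales linearly with <a,b>; you only normalize if you specifically want a rate of change per unit length.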
[QUOTE=JohnnyMo1;49225404] :words: [/QUOTE] Does this extend nicely to the Riemann-Stieltjes integral as well? i.e. [img]http://i.imgur.com/RDd4cdv.gif[/img]
[QUOTE=Wunce;49226470]Does this extend nicely to the Riemann-Stieltjes integral as well? i.e. [img]http://i.imgur.com/RDd4cdv.gif[/img][/QUOTE] Yes. If you take a look at any good differential geometry text, it'll cover integration on manifolds. Differential forms (of which covector fields are a specific example) can be thought of as "the things which we can integrate over."
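As a down-to-earth complement (my own sketch, assuming nothing beyond the definition): a Riemann-Stieltjes integral of f dg can be approximated by summing f against increments of g, which makes the "integrate against dg" reading literal:

```python
# Left-endpoint Riemann-Stieltjes sum for the integral of f dg over [a, b].
def stieltjes(f, g, a, b, n=100_000):
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + i * h
        total += f(x) * (g(x + h) - g(x))  # f times an increment of g
    return total

# Sanity check: with g(x) = x this reduces to the ordinary Riemann
# integral, e.g. the integral of 2x dx over [0, 1] is 1.
print(stieltjes(lambda x: 2 * x, lambda x: x, 0.0, 1.0))
```

Swapping in a non-trivial g (say g(x) = x^2) weights the increments differently, which is the whole point of the Stieltjes generalization.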
Ok, I did the first step: lim [Δx -> 0] (Δy/Δx) = dy/dx. This must be correct, yes? Continuing now...
In the simplest of terms, you're expressing the limit definition of the derivative.
Yeah, I know. It's just that when I see an equation, for example x dx + (2x + y) dy, I don't really know what it means. I will study it hard. Just one mindfuck exercise I arrived at: with three steps of bisection, calculate an approximate value of the cube root of 20, CubeRoot[20]. Isn't bisection... for roots? What the heck, really. [editline]4th December 2015[/editline] Oooh wait, I think it's CubeRoot[20] - x = 0, gotta find x. No wait, I just tripped myself up again... Ok, I woke up and solved it in 5 seconds... it's so unimaginably simple:
1. CubeRoot[20] = x
2. 20 = x^3
3. x^3 - 20 = 0
4. Now find the root with bisection...
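The steps above are easy to sketch in Python (my own throwaway code): three bisection steps on f(x) = x^3 - 20, starting from the bracket [2, 3] since f(2) < 0 < f(3):

```python
# Bisection for x^3 - 20 = 0, i.e. the cube root of 20 (~2.714).
def bisect(f, lo, hi, steps):
    for _ in range(steps):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:   # sign change: root lies in the left half
            hi = mid
        else:                     # otherwise the root is in the right half
            lo = mid
    return (lo + hi) / 2

f = lambda x: x**3 - 20
print(bisect(f, 2.0, 3.0, 3))   # 2.6875 after three steps
```

Each step halves the bracket, so three steps pin the root down to within 1/16 of the starting interval.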
I've got an engineering project and we're trying to get two gears to fit together. Basically, we're building a Horizontal Axis Wind Turbine, and we're going to have these two gears ([URL="http://uk.rs-online.com/web/p/spur-gears/5216193/"]this [/URL]and [URL="http://uk.rs-online.com/web/p/spur-gears/5217174/"]this[/URL]) in our assembly. Would they work together? I have no idea how pitch diameter works, but I feel like those gears won't match up and they'll just grind against each other. Also, would it be a smart idea to have both gears made of metal, instead of one metal (harder material) and one plastic (softer material), to avoid excessive wear and tear? I'm worried that at high RPMs, the plastic teeth will just get shredded.
[QUOTE=loopoo;49251586]I've got an engineering project and we're trying to get two gears to fit together. Basically, we're building a Horizontal Axis Wind Turbine, and we're going to have these two gears ([URL="http://uk.rs-online.com/web/p/spur-gears/5216193/"]this [/URL]and [URL="http://uk.rs-online.com/web/p/spur-gears/5217174/"]this[/URL]) in our assembly. Would they work together? I have no idea how pitch diameter works, but I feel like those gears won't match up and they'll just grind against each other. Also, would it be a smart idea to have both gears made of metal, instead of one metal (harder material) and one plastic (softer material), to avoid excessive wear and tear? I'm worried that at high RPMs, the plastic teeth will just get shredded.[/QUOTE] I'm not sure about the gears offhand, but the reason you use plastic vs. metal gears is to protect the things they're connected to. Like, if you have an expensive motor connected to a transmission and something seizes up, you don't break the expensive bit, you strip a gear instead.
I really hate it when I read math papers and they don't fucking specify what some symbols stand for. What assholes. [editline]6th December 2015[/editline] So I was reading some random paper... [url]http://www.math.tamu.edu/~sottile/teaching/10.S/Ch2.pdf[/url] And I noticed it used the HSBC logo to end the proof (page 3) [IMG_thumb]http://contactdir.uk/wp-content/uploads/2015/07/hsbc1.jpg[/IMG_thumb] What the hell, really :v:.
Gotta love it when the thing you need to find is buried inside 3 equations, some of which are raised to odd powers like ^0.8 or ^0.33. On this engineering problem, people were using Solver and getting weird answers because they weren't constraining Solver to logical values, like: the diameter of the thing we're finding has to be greater than 0 and greater than the previous diameter, and the velocity of the fluid has to be greater than 0 but less than 10 m/s, because anything more would be outrageous. Sometimes when modeling a system you forget what should look reasonable. [editline]6th December 2015[/editline] The great thing about engineering problems in this class is that it takes 5 hours of work to find something that one flipping sensor at the inlet could have measured, and the problem statement even states that we are only doing this because the inlet sensor has failed.
Yeah, electronics is serious business, I had quite a few headaches with it. Especially when you solder your own stuff and one pin is... not touching. Also, finally figured out fucking differentials. It's simple! If we assume Δx = dx, then we can say Δy ≈ dy. Let's define the function f(x) = cos(x^2 - 1). We can calculate some Δx like x2 - x1. Let x2 = 2.03 and x1 = 2, so Δx = 0.03. Next, Δy can be calculated like this: Δy = f(x2) - f(x1) = f(2.03) - f(2) = -0.00979341808. OR we can use the differential to estimate Δy. Let's evaluate the differential: df(x) = d cos(x^2 - 1) = -sin(x^2 - 1) d(x^2 - 1) = -2x sin(x^2 - 1) dx. Let's plug in x1 and Δx: -2 * 2 * sin(2^2 - 1) * 0.03 = -0.01693440. Ok, in this case the error is quite big, but I hope you guys see my point.
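A quick numerical check of those numbers (a sketch; note the chain rule contributes a factor of 2x, so the linear estimate at x = 2 carries a factor of 4):

```python
import math

# f(x) = cos(x^2 - 1); by the chain rule df = -2x sin(x^2 - 1) dx.
f = lambda x: math.cos(x * x - 1)

x1, dx = 2.0, 0.03
actual = f(x1 + dx) - f(x1)                    # true change in y
linear = -2 * x1 * math.sin(x1 * x1 - 1) * dx  # differential estimate

print(round(actual, 8))   # -0.00979342
print(round(linear, 8))   # -0.0169344
```

The estimate really is far off here: the second derivative of f is large near x = 2 (the argument x^2 - 1 sweeps through angles quickly), so the tangent line pulls away from the secant fast.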
secant line approximation versus tangent line actual.
I'm a bit confused about tensors. Carroll states that you can leave "the argument list" incomplete, i.e. if you have a (1,1) tensor and you let it act on a (1,0) tensor, you get again a (1,0) tensor (i.e. your typical matrix-vector product that produces a new vector). However, if I write it down explicitly (x being the ordinary product): [IMG]http://i.imgur.com/ddgVUGy.png[/IMG] Would it be correct to say that the left-hand side is not a tensor at all, but rather a geometric object that is coordinate/basis independent? The right-hand side seems to imply that, because it has no "net" number of indices. And yet the left-hand side also seems to say that it just lets a tensor act on a vector, so the output should be a vector, which it sort of is, but not if you view it purely from an "index" perspective. Is it because you can refer to a vector as "an independent geometric entity" and at the same time you often refer to the components of a vector as "a vector"? Is it the same with tensors? So do tensors only exist as objects with certain indices that transform in a specific way, or are the indices a consequence of choosing a basis, which is not necessary to have a tensor object? Or perhaps the left-hand side is a multilinear map, in this case one that, when acting on a dual vector, produces a real number? This does seem to be compatible with the right-hand side. My question might seem unclear; it's just that there seem to be multiple perspectives, and it is not immediately clear which one is useful in which context (conceptual, or computational, ...)
Do any of you know about NJIT's math department and its reputation in the world? Was talking with a friend who is ahead of me, and the way she was talking about it made it seem much more impressive than I thought it was.
[QUOTE=Call Me Kiwi;49369273]Do any of you know about NJIT's math department and its reputation in the world? Was talking with a friend who is ahead of me, and the way she was talking about it made it seem much more impressive than I thought it was.[/QUOTE] Grad school rankings have it just shy of the top 100. Your life won't be sunk if you go, but it's not turning any heads. [editline]22nd December 2015[/editline] [QUOTE=Number-41;49368945]I'm a bit confused about tensors. Carroll states that you can leave "the argument list" incomplete, i.e. if you have a (1,1) tensor and you let it act on a (1,0) tensor, you get again a (1,0) tensor (i.e. your typical matrix-vector product that produces a new vector). However, if I write it down explicitly (x being the ordinary product): [IMG]http://i.imgur.com/ddgVUGy.png[/IMG] Would it be correct to say that the left-hand side is not a tensor at all, but rather a geometric object that is coordinate/basis independent? The right-hand side seems to imply that, because it has no "net" number of indices. And yet the left-hand side also seems to say that it just lets a tensor act on a vector, so the output should be a vector, which it sort of is, but not if you view it purely from an "index" perspective. Is it because you can refer to a vector as "an independent geometric entity" and at the same time you often refer to the components of a vector as "a vector"? Is it the same with tensors? So do tensors only exist as objects with certain indices that transform in a specific way, or are the indices a consequence of choosing a basis, which is not necessary to have a tensor object? Or perhaps the left-hand side is a multilinear map, in this case one that, when acting on a dual vector, produces a real number? This does seem to be compatible with the right-hand side.
My question might seem unclear; it's just that there seem to be multiple perspectives, and it is not immediately clear which one is useful in which context (conceptual, or computational, ...)[/QUOTE] I think you've got the right idea (in part, that there are multiple ways to think of it!). A tensor is an object which has basis-independent existence, but we usually choose a basis to represent it and do computations, and in fact the collection of tensors of a certain type forms a vector space. Usually I find the "multilinear map" idea the easiest to think about and use for geometric understanding. An (m,n) tensor can act on a collection of n vectors and m covectors (that's the way the convention goes, right? I forget :v:) to produce a scalar. If you have it act on, say, one vector, you've got an object which is "waiting" for n-1 more vectors and m more covectors to make a real number: that's an (m,n-1) tensor. You'd have to check multilinearity to prove it rigorously, but hopefully it should seem pretty intuitive. I'm a little confused about some of your questions, so if you can rephrase what's still unclear, I can try to answer.
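Here's a bare-bones numerical illustration of that "waiting for arguments" idea (my own toy code; the components of T are made up). A (1,1) tensor on R^2 eats one covector and one vector to give a scalar; feeding it only the vector leaves something with one free slot, and its components against the basis covectors are exactly the matrix-vector product:

```python
# Components T^i_j of a toy (1,1) tensor in the standard basis (made up).
T = [[1.0, 2.0],
     [3.0, 4.0]]

def apply_T(covector, vector):
    # Full evaluation: T(w, v) = sum over i, j of w_i * T^i_j * v^j, a scalar.
    return sum(covector[i] * T[i][j] * vector[j]
               for i in range(2) for j in range(2))

v = [1.0, 1.0]
# Partial evaluation T(., v): read off components against the basis covectors.
Tv = [apply_T([1.0, 0.0], v), apply_T([0.0, 1.0], v)]
print(Tv)   # [3.0, 7.0] -- the usual matrix-vector product T v
```

The map w ↦ apply_T(w, v) is linear in w, so it is itself a (1,0) tensor: a vector, whose components Tv are what the index expression computes.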
Springer is giving away math ebooks that were published more than 10 years ago for no apparent reason. A ton of great ones in there. [url]http://link.springer.com/search?facet-series=%22136%22&facet-content-type=%22Book%22&showAll=false[/url]
[QUOTE=JohnnyMo1;49370403]Grad school rankings have it just shy of the top 100. Your life won't be sunk if you go, but it's not turning any heads. [editline]22nd December 2015[/editline] I think you've got the right idea (in part, that there are multiple ways to think of it!). A tensor is an object which has basis-independent existence, but we usually choose a basis to represent it and do computations, and in fact the collection of tensors of a certain type forms a vector space. Usually I find the "multilinear map" idea the easiest to think about and use for geometric understanding. An (m,n) tensor can act on a collection of n vectors and m covectors (that's the way the convention goes, right? I forget :v:) to produce a scalar. If you have it act on, say, one vector, you've got an object which is "waiting" for n-1 more vectors and m more covectors to make a real number: that's an (m,n-1) tensor. You'd have to check multilinearity to prove it rigorously, but hopefully it should seem pretty intuitive. I'm a little confused about some of your questions, so if you can rephrase what's still unclear, I can try to answer.[/QUOTE] Yeah, it was difficult to word. I'm now at the curvature chapter and it doesn't seem to bother me anymore; in terms of computation it's all quite clear :v:
[QUOTE=Number-41;49406225]Yeah it was difficult to word it. I'm now at the curvature chapter and it doesn't seem to bother me anymore, in terms of computation it's all quite clear :v:[/QUOTE] That's one nice thing about physics tensor notation. The manipulations are often very simple, but it's still a good idea to have a geometrical idea of what you're doing. I got through my GR class without any sense of what a tensor really is because shuffling indices around is pretty easy. :v:
I keep my mind at peace by telling myself that a tensor is a thing that transforms like a tensor :v: Also that it can spit out real numbers if given the appropriate other tensors.
I'm looking for a good book on Fourier transforms (and Laplace transforms) to use in tandem with the designated book. Any ideas?
[QUOTE=Number-41;49407094]I keep my mind at peace by telling myself that a tensor is a thing that transforms like a tensor :v:[/QUOTE] Nooo don't do it don't ever use those words again
Really stupid question: Why is [IMG]https://i.gyazo.com/0e7a6a4fcfd76c9000a6d11340558119.png[/IMG] the same as [IMG]https://i.gyazo.com/44cf2c8e587a2aac34b80ef28cbd87d8.png[/IMG] ? I've been leaving my answers in the first form, and I'm worried I may lose marks for it. I was confused as fuck when the solution sheet showed a different answer from mine, but then I realised it's just a more exact form. I'm just not sure how you go from my answer to the exact one. [editline]1st January 2016[/editline] For some reason the second image has a negative sign; it shouldn't. I dunno why Wolfram Alpha is making it negative.
Because the complex exponential function is (2*pi*i)-periodic? In other words, e^(2*pi*i) = 1. Euler's formula makes it especially obvious: e^(i*x) = cos(x) + i*sin(x) So don't worry about it, both answers are correct.
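A quick sanity check of the periodicity argument with Python's cmath (a sketch; the value x = 0.7 is just an arbitrary test point):

```python
import cmath
import math

# e^{2 pi i} = 1, so tacking 2*pi*i (or any integer multiple of it)
# onto an exponent changes nothing.
print(abs(cmath.exp(2j * math.pi) - 1) < 1e-12)                            # True

# Euler's formula: e^{ix} = cos(x) + i sin(x), checked at x = 0.7.
x = 0.7
print(abs(cmath.exp(1j * x) - complex(math.cos(x), math.sin(x))) < 1e-12)  # True

# Hence shifting the exponent by 2*pi*i leaves the value unchanged.
print(abs(cmath.exp(1j * (x + 2 * math.pi)) - cmath.exp(1j * x)) < 1e-12)  # True
```

So two answers differing by a factor of e^(2*pi*i*k) for integer k are literally the same complex number.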
Oh damn I completely forgot that the i^2 turns into a -1, whoops. Makes so much more sense now, thanks for explaining. It's easy to forget you can do that when the power is a long string of stuff, as opposed to just a ^2
Math guys, I need advice and guidance. I am in my 4th year of an electrical engineering degree. Just half a year and I will get my degree and sail into the real world. I have taken a calculus course, linear algebra, discrete mathematics, probability theory, and applied statistics. I am mainly a programmer, and math has helped me a lot in that field. I will certainly be a programmer when I graduate (because the degree is useless and I don't like it, and I already have experience working as a coder, etc.). I feel like despite all the courses I don't understand math. I have been given an overview of many things, but forgot them all. I can tell which task falls into what department and where to search, but I feel like I need more. I am fascinated by numbers, and I have always felt like math makes you see the world differently when you get it. You wouldn't believe how emotional I got when I learned about how probability theory is used in industrial mass production every day. I also feel like a half-educated programmer because I don't know math. I sometimes fumble where I shouldn't: getting coordinates right on a 2D plane, working with vectors. So here come my questions: is it possible to properly self-educate myself in algebra, discrete mathematics, and probability theory? If so, how long is it going to take? What online courses and books would help? Is it worth it at all, or am I better off learning practical things now that I am 5 minutes from the struggle to earn a living? That's a lot of questions, thanks in advance.
It's definitely possible to self-educate. You can teach yourself almost anything with enough determination. MIT OpenCourseWare is an amazing resource. Some courses have video lectures (which I think are the best way to self-learn), but even with the ones that don't, you generally get a book recommendation, a schedule of readings, homework problems, and hopefully solutions. That will give you a pre-built curriculum to guide you.
[QUOTE=JohnnyMo1;49494607]It's definitely possible to self-educate. You can self-educate almost anything with enough determination. MIT Open Courseware is an amazing resource. Some have video lectures (which I think are the best way to self-learn) but even with the ones that don't, you generally get a book recommendation, a schedule of readings, homework problems and hopefully solutions. That will give you a pre-built curriculum to guide you.[/QUOTE] I'm kinda like the guy you responded to. I'm also a programmer, and I frequently find myself programming things that involve math. I'd like to make more sense of the books and papers I read because it all looks like gibberish to me. Thanks for the recommendation.