• Mathematician Chat v. 3.999...
[QUOTE=agentalexandre;47943483]Can anyone give a brief explanation as to what the members of an infinite Cartesian product are from an axiomatic set theory point of view? I'm used to seeing infinite products indexed over the natural numbers (from basic topology) where elements are just sequences. I'm having a hard time visualising the infinite product over a general indexing set (where members are apparently functions whose domain is the index set).[/QUOTE] It is certainly harder to think of the points; you can't really write them as sequences anymore. You [I]could[/I] well-order the indexing set. Then there's a minimum element and everything has an immediate successor, but the order has weirder properties than the natural numbers: not every point is a finite number of steps away from the minimum, for instance. The best approach is not to try to write points out component-by-component the way you would a sequence; you can still write down the component at any particular index explicitly if you need it.
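To make the "points are functions on the index set" idea concrete, here is a minimal Python sketch. A finite set has to stand in for the arbitrary index set (code can't literally hold an infinite product), and all the names here are purely illustrative:

```python
# A point of a product  Π_{i∈I} X_i  is a function with domain I whose value
# at each index i lies in X_i. Finite stand-in for an arbitrary index set I:
index_set = {"a", "b", "c"}
factors = {"a": {0, 1}, "b": {0, 1, 2}, "c": {5}}   # the sets X_i

# One "point" of the product: a choice function, here a dict mapping each
# index to an element of the corresponding factor.
point = {"a": 1, "b": 2, "c": 5}

# Check it really is a member of the product:
assert point.keys() == index_set
assert all(point[i] in factors[i] for i in index_set)

# The i-th component is just evaluation at i:
print(point["b"])   # -> 2
```

The same picture works for any index set: a member of the product is nothing more than a rule assigning a component to every index at once.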
[QUOTE=JohnnyMo1;47946382]It is certainly harder to think of the points. You can't write them properly as sequences anymore. You [I]could[/I] well-order the indexing set. Then there's a minimum in the indexing set and everything has an immediate successor, but it's got weirder properties than indexing over the natural numbers. Not every point is a finite number of steps away from the minimum as they are in the natural numbers, for instance. Best solution is to not write the points down with all the components as you might with a sequence so much. You can still write out a component for any index value explicitly if you need.[/QUOTE] Does this require choice?
[QUOTE=JohnnyMo1;47946382]It is certainly harder to think of the points. You can't write them properly as sequences anymore. You [I]could[/I] well-order the indexing set. Then there's a minimum in the indexing set and everything has an immediate successor, but it's got weirder properties than indexing over the natural numbers. Not every point is a finite number of steps away from the minimum as they are in the natural numbers, for instance. Best solution is to not write the points down with all the components as you might with a sequence so much. You can still write out a component for any index value explicitly if you need.[/QUOTE] Ah alright, cheers for the explanation. [editline]13th June 2015[/editline] [QUOTE=yawmwan;47946441]Does this require choice?[/QUOTE] You don't specifically need the axiom of choice to define the infinite product; you just need it to guarantee that the product isn't empty (without choice it could be empty even if all the sets you are taking the product over are non-empty).
[QUOTE=yawmwan;47946441]Does this require choice?[/QUOTE] To well-order an arbitrary set, yes. [editline]13th June 2015[/editline] [QUOTE=agentalexandre;47946538]Ah alright, cheers for the explanation.[/QUOTE] Munkres' topology book does a very good job building it up. Work through the relevant exercises if you want some practice thinking about it.
I need motivation for this antiderivative class, so can someone give me some concrete examples of uses, both in 'regular day to day' life and in a computer games or programming environment? The reason I lose interest is that it looks daunting and I don't see an immediate use for it.
What is this shit, minimal polynomial [editline]13th June 2015[/editline] [QUOTE=Pretermit;47947107]I need motivation to do this antiderivative class, so can someone give me some concrete examples of uses in both a 'regular day to day' and computer games or programming environment. The reason I lose interest is because it looks daunting, and I do not see an immediate use for it.[/QUOTE] Antiderivatives are just integration... and integration is immensely useful. I mean it, it's really useful; it's used everywhere in math and physics. Yes, even games use it (not all of them, of course).
Length of a curve? Integral.
Area under a function? Integral.
Physics simulation? A process called integrating.
Electric charge? Integral.
Kinetic energy? Integral.
Potential energy? Integral.
[QUOTE=Fourier;47948763]What is this shit minimal polynomial[/QUOTE] Are you asking what a minimal polynomial is?
[QUOTE=Pretermit;47947107]I need motivation to do this antiderivative class, so can someone give me some concrete examples of uses in both a 'regular day to day' and computer games or programming environment. The reason I lose interest is because it looks daunting, and I do not see an immediate use for it.[/QUOTE] Antiderivatives (and hence integrals) are super useful. Say you have a function giving you the velocity of a car at any point in time. If you find the antiderivative, that plus one other piece of information (a boundary condition, like "the car starts at position zero") will tell you where the car [I]is[/I] at any point in time. Same thing if you know the acceleration, only this time you have to find the antiderivative of the antiderivative, and you need two extra pieces of info (starting position and starting velocity). This is just the basic "physical quantities" example that you usually see first; there are tons of other applications. You can calculate the area of all sorts of weird shapes using integration.
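The velocity-to-position example can even be done numerically in a few lines, which is how a game or simulation would typically do it. A small sketch (the velocity function and numbers are made up for illustration):

```python
import numpy as np

# Car's velocity v(t) = 3t^2 (m/s); its antiderivative is t^3 + C.
# With the boundary condition "the car starts at position zero", C = 0,
# so the position is x(t) = t^3. Check numerically with a cumulative
# trapezoid rule.

t = np.linspace(0.0, 2.0, 100_001)
v = 3.0 * t**2

dt = t[1] - t[0]
# Cumulative integral of v from 0 to each t (position, with x(0) = 0):
x = np.concatenate(([0.0], np.cumsum((v[1:] + v[:-1]) / 2.0 * dt)))

print(x[-1])   # ≈ 2^3 = 8
```

This is exactly what "integrating the equations of motion" means in a physics engine, just done step by step instead of symbolically.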
[QUOTE=agentalexandre;47949383]Are you asking what a minimal polynomial is?[/QUOTE] Yes, but I figured it out, it's simpler than it looks.
How would you describe what a wavelet is? Can I use one to extract data from a continuous stream of music?
[QUOTE=Fourier;47955662]How would any of you describe, what is wavelet? Can I use it to extract data from continuous stream of music?[/QUOTE] I don't know about the second question, but it looks like you can define them as a product of a periodic function and a function of bounded support (or more generally just a function which falls off to zero at infinity). What physicists might call a wave packet.
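That "periodic function times something that falls off to zero at infinity" description can be sketched directly. Here is a minimal real Morlet-style wavelet (a cosine carrier under a Gaussian envelope); the parameter values are illustrative, not any library's defaults:

```python
import numpy as np

def morlet_like(t, freq=5.0, width=0.5):
    """Cosine carrier (the periodic part) times a Gaussian envelope
    (which decays to zero at infinity)."""
    envelope = np.exp(-t**2 / (2 * width**2))
    carrier = np.cos(2 * np.pi * freq * t)
    return envelope * carrier

t = np.linspace(-3, 3, 2001)
w = morlet_like(t)

# Peaks at t = 0, and has essentially died off at the window edges:
print(abs(w[0]) < 1e-3, abs(w[-1]) < 1e-3)   # -> True True
```

The localisation in time is the whole point: unlike a pure sine wave, a wavelet can tell you not just which frequency is present, but roughly [I]when[/I].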
If I transform one vector with FFT into frequency domain and all wavelets into frequency domain then I can do scalar product between wavelets and vector inside frequency domain correct?
As long as your "vector" lives in the same Hilbert space as the wavelet (Wiki seems to imply we want wavelets to be square integrable), sure.
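For discrete signals the frequency-domain scalar product really does agree with the time-domain one, up to a factor of 1/N (Parseval's theorem for the unnormalised FFT). A quick numerical sanity check, with random stand-ins for the signal and the wavelet:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
x = rng.standard_normal(N)   # stand-in for a chunk of the music signal
w = rng.standard_normal(N)   # stand-in for a sampled wavelet

time_dot = np.dot(x, w)

# Parseval: <x, w> = (1/N) <FFT(x), FFT(w)> for the unnormalised FFT.
freq_dot = np.vdot(np.fft.fft(x), np.fft.fft(w)).real / N

print(np.isclose(time_dot, freq_dot))   # -> True
```

So computing wavelet coefficients via the FFT gives the same numbers, just faster for long signals.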
I am having some confusion in finding the inverse of this function: [IMG]http://latex.codecogs.com/gif.latex?f(x) %3D x^x[/IMG] [IMG]http://latex.codecogs.com/gif.latex?\frac{df}{dx} %3D (ln(x) + 1)x^x[/IMG] [IMG]http://latex.codecogs.com/gif.latex?g(y) %3D f^{-1}(y)[/IMG] [IMG]http://latex.codecogs.com/gif.latex?\frac{dg}{dy} %3D \frac{1}{(ln(g(y)) + 1)y}[/IMG] So I decided to find a complex function to evaluate this for complex arguments. I won't go through the whole thing to show where I got this from, although I do have it all written down to send to people if they are interested. It is as follows: [img]http://i.imgur.com/XCl26Rk.gif[/img] [IMG]http://latex.codecogs.com/gif.latex?g(a%2Cb) %3D \frac{b}{2}ln(a^2 + b^2) + a\cdot tan^{-1}(\frac{b}{a})[/IMG] [IMG]http://latex.codecogs.com/gif.latex?(a+bi)^{(a+bi)} %3D \sqrt[]{f(a%2Cb)}e^{i\cdot g(a%2Cb)}[/IMG] Does anyone have any idea of how to find an inverse function? I will try to work out some branch cuts to calculate the domain of the inverse function. And just some extra stuff, where you can imagine the noise I make when I see it works out as it should. [IMG]http://latex.codecogs.com/gif.latex?f(a%2C0) %3D e^{a\cdot ln(a^2)}[/IMG] [IMG]http://latex.codecogs.com/gif.latex?g(a%2C0) %3D 0[/IMG] [IMG]http://latex.codecogs.com/gif.latex?(a+0i)^{a+0i} %3D \sqrt{f(a%2C0)} %3D \sqrt{e^{ln(a^{2a})}} %3D a^a[/IMG]
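For what it's worth, on the reals the inverse of f(x) = x^x has a known closed form via the Lambert W function: from x^x = y, take logs to get x ln x = ln y, substitute x = e^w so that w e^w = ln y, and hence x = exp(W(ln y)). A sketch using SciPy's lambertw (principal real branch only; this says nothing about the complex branch-cut question):

```python
import numpy as np
from scipy.special import lambertw

def inverse_xx(y):
    """Real inverse of f(x) = x^x via x = exp(W(ln y)), principal branch."""
    return np.exp(lambertw(np.log(y)).real)

y = 27.0                 # 3^3 = 27, so the inverse should return 3
x = inverse_xx(y)
print(x, x**x)           # -> approximately 3.0 and 27.0
```

The complex case is where the branch cuts come in: both the logarithm and W are multivalued there, so the choice of branches determines the domain of the inverse.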
[QUOTE=JohnnyMo1;47966929]As long as your "vector" is one in the same Hilbert space as the wavelet (Wiki seems to imply we want wavelets to be square integrable), sure.[/QUOTE] Thanks, because I was not really sure!
Autocorrelation, I always thought you were useless. Then today I was enlightened... autocorrelation can be used to detect the period of a function! How sick is that?
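The trick is that the autocorrelation of a periodic signal peaks at lags that are multiples of the period, so the first strong peak after lag zero gives you the period. A quick demo with a made-up 50 Hz sine:

```python
import numpy as np

fs = 1000                        # sample rate (Hz)
f0 = 50                          # true frequency -> period of fs/f0 = 20 samples
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * f0 * t)

# Autocorrelation at lags 0 .. N-1:
ac = np.correlate(x, x, mode="full")[len(x) - 1:]

# First local maximum after lag 0 = the period in samples:
peak = 1
while not (ac[peak - 1] < ac[peak] > ac[peak + 1]):
    peak += 1

print(peak)   # -> 20 samples, i.e. 50 Hz
```

Real signals need a bit more care (noise, harmonics that produce sub-peaks), but this is the core of many pitch-detection algorithms.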
[QUOTE=Fourier;47933820]Khan Academy, Google, Quora, Math Overflow, Math.StackExchange, Facepunch [editline]11th June 2015[/editline] By the way, anyone knows how to make upper triangle matrix with help of gauss? A = original matrix P, P^-1 = change of basis matrices T = triangle matrix To get something like that? A = P T P^-1[/QUOTE] I don't understand; are you asking what those matrices are? The form you give above is for the diagonalization of a matrix A. T is a [I]diagonal[/I] matrix (whose entries are A's eigenvalues). To formulate A as a product of triangular matrices you can use an LU/PLU decomposition.
[QUOTE=PopLot;47986771]I don't understand; are you asking what those matrices are? The form you give above is for the diagonalization of a matrix A. T is a [I]diagonal[/I] matrix (whose entries are A's eigenvalues). To formulate A as a product of triangular matrices you can use an LU/PLU decomposition.[/QUOTE] Yeah, I see. I was actually asking if it is possible to make an upper triangular matrix using Gauss transformations (row swaps etc.) such that the upper triangular matrix keeps all the eigenvalues. But of course, if the matrix is full rank, then T will be diagonal. And no, it's not possible to do that with Gauss alone. As for LU decomposition, I think it is waiting for us next year.
[QUOTE=Fourier;47987809]Yeah I see. I was actually asking, if it is possible to make upper triangular matrix with help of gauss transformations (row swap etc) where the upper triangular matrix keeps all eigenvalues. But of course, if matrix is full rank, then T will be diagonal. But no, it's not possible to do it with Gauss so that is. As for LU decomposition, it is waiting for us next year I think.[/QUOTE] Oh. Well you use Gaussian row operations to do LU/PLU decompositions.
[QUOTE=PopLot;47987935]Oh. Well you use Gaussian row operations to do LU/PLU decompositions.[/QUOTE] Yeah, I saw that when I was googling around to find out what LU really is. So LU is used for solving linear systems instead of plain Gaussian elimination because of possibly large errors in Gaussian?
It also speeds up algorithms because it's faster than a complete matrix inversion.
[QUOTE=Fourier;47988019]Yeah I saw that when I was google-ing around what LU is really. So LU is used for solving linear systems instead of Gaussian Elimination because of possible big errors in Gaussian?[/QUOTE] The initial Gaussian elimination costs ~ (2/3)n^3 operations; each forward/backward substitution afterwards is only ~ n^2. Matrix inversion, besides being slower, is also not as numerically stable.
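Those cost estimates are the whole point of LU in practice: factor once at ~(2/3)n^3, then solve for as many right-hand sides as you like at ~n^2 each, instead of redoing elimination every time. A small sketch with SciPy (random data, just for illustration):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))

lu, piv = lu_factor(A)          # PLU factorisation: the O(n^3) part, done once

for _ in range(3):              # each solve reuses the factorisation, O(n^2)
    b = rng.standard_normal(n)
    x = lu_solve((lu, piv), b)
    print(np.allclose(A @ x, b))   # -> True each time
```

With many right-hand sides (or a time-stepping loop reusing the same matrix), the saving over repeated elimination is substantial.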
Just had my first calc exam (well my second first exam) and some of this shit is still really tough for me to grasp. The professor is making it a lot easier than the last, but some of it I'm still foggy with
You will get better over time, no doubt. I sucked at my first exam too.
I needed to grasp the complex exponential in time for signal theory, because that entire subject is hugely based on it, so I launched MATLAB. Instead of learning anything useful I ended up fucking around and figuring out how to animate function plots [vid]http://sinus.cz/~milan/SINESS.mp4[/vid] I hate studying arghh
You need to study this? [code] e^(iφ) = cos φ + i sin φ [/code] Well, study Taylor series first and you will see where this formula pops up and why it works.
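The Taylor-series route can be checked numerically in a few lines: sum the series for exp at an imaginary argument and compare against cos + i sin. A small sketch (the number of terms is arbitrary, just enough for the series to converge):

```python
import math

def exp_taylor(z, terms=40):
    """Partial sum of the Taylor series of exp:  sum of z^k / k!."""
    return sum(z**k / math.factorial(k) for k in range(terms))

phi = 1.234
lhs = exp_taylor(1j * phi)                     # e^{i*phi} via its series
rhs = complex(math.cos(phi), math.sin(phi))    # cos(phi) + i*sin(phi)

print(abs(lhs - rhs) < 1e-12)   # -> True
```

The proof is the same computation done symbolically: the even powers of iφ collect into the cosine series and the odd powers into i times the sine series.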
I am not primarily studying maths. I have passed this semester's analytic calculus (mostly integral calculus of several variables), and THANK FUCK that's hopefully the last maths exam I'll ever have to take. This kind of complex exponential stuff is very important in signal theory though. I am mostly dealing with the examination of LTI (linear, time-invariant) systems right now: stability, frequency characteristics of a system, moving between differential equations, state-space descriptions, diagrams...
Is your signal theory class doing both continuous and discrete time?
[QUOTE=Awesomecaek;48012242]I am not primarily studying maths. I have passed this semester's analytic calculus (mostly integral calculus of multiple variables) and THANKS FUCK that's hopefully the last exam in maths I ever had to take. This kind of complex exponential stuff is very important in signal theory though. I am mostly dealing with examination of LTI (linear, time invariant) systems right now, stability, frequency characteristics of a system, moving between differential equations, state-space description, diagrams...[/QUOTE] Do you also deal with convolution/cross-correlation and autocorrelation (and FFT optimization of those procedures)? It's really cool stuff if you ask me. And I hope we will also get LTIs next year at my faculty.
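The "FFT optimization" of convolution in one line: convolution in the time domain is pointwise multiplication in the frequency domain, so an O(n^2) direct convolution becomes O(n log n) after zero-padding to the full output length (the padding avoids circular wrap-around). A quick check with random data:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(100)
h = rng.standard_normal(30)

direct = np.convolve(x, h)                 # O(n^2) reference result

n = len(x) + len(h) - 1                    # full linear-convolution length
fft_conv = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)

print(np.allclose(direct, fft_conv))       # -> True
```

Cross-correlation works the same way, except one spectrum gets conjugated before the multiplication.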
Yes, convolution and correlation were among the exam questions, but IIRC the FFT was only mentioned, not really taught in depth. Generally, most of the questions were about the various ways of representing an LTI system (differential equations, state equations, state matrices, visual graphs, transfer functions), the ability to take a system given in one form and describe it in the others, and figuring out stability and other qualities of the system (I couldn't do a Bode diagram by hand to save my life, no matter how many hours I spent trying to figure it out). I passed with a wonderfully secure E, which is all I ever wanted.