[QUOTE=JohnnyMo1;52208231]I think his linear algebra textbook sucks, but it probably depends on what your major is and what you intend to be doing.[/QUOTE]
Ah okay, I thought it was very readable, which is why I liked it. I'm in Electrical Engineering, actually.
Why is it that uni classes seem to put DiffEq and Linear Algebra together as one class or topic?
I spent 2 extra years due to math, so laugh all you want.
How the hell does this work, I must know the rules behind this
What is y: 4y - 2x = 20
What is x: 4x/2 - y = 0
[QUOTE=The bird Man;52211009]I spent 2 extra years due to math, so laugh all you want.
How the hell does this work, I must know the rules behind this
What is y: 4y - 2x = 20
What is x: 4x/2 - y = 0[/QUOTE]
Let's start with 4y - 2x = 20. First, note that x and y are [i]just numbers[/i]; the only issue is that we don't know their specific values. Next, remember that equations like these are just seesaws which must be balanced at all times: what we do to one side, we must do to the other.
We get:
4y - 2x = 20
4y - 2x + 2x = 20 + 2x [We added +2x to both sides, to cancel out that -2x on the LHS]
4y = 20 + 2x
4y/4 = (20 + 2x)/4 [Divided both sides by 4.]
y = (20 + 2x)/4.
Does this help?
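And if you want to double-check the whole system (reading the second equation as 4x/2 - y = 0), here's a quick sanity check in plain Python; Fraction is just there to keep the arithmetic exact:

```python
from fractions import Fraction

# The system from above:  4y - 2x = 20  and  4x/2 - y = 0.
# The second equation gives y = 2x; substituting into the first:
# 4*(2x) - 2x = 20  ->  6x = 20  ->  x = 10/3, y = 20/3.
x = Fraction(10, 3)
y = 2 * x

assert 4 * y - 2 * x == 20           # first seesaw balances
assert Fraction(4) * x / 2 - y == 0  # second seesaw balances
print(x, y)  # 10/3 20/3
```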
How do y'all feel about number theory
[QUOTE=ZenX2;52213090]How do y'all feel about number theory[/QUOTE]
I took a course on "Abstract algebraic number theory" and hated it so much I switched my specialisation from discrete math to probability.
Basic number theory is pretty fun though; it's just the high-level stuff that becomes a slog.
[QUOTE=ZenX2;52213090]How do y'all feel about number theory[/QUOTE]
Still coming to terms with the tie between complex numbers and quaternions.
Maths has some real funky relationships.
Got a cool textbook on discrete maths that I look at every now and again.
[QUOTE=Wunce;52211772]Let's start with 4y - 2x = 20. First, note that x and y are [i]just numbers[/i]; the only issue is that we don't know their specific values. Next, remember that equations like these are just seesaws which must be balanced at all times: what we do to one side, we must do to the other.
We get:
4y - 2x = 20
4y - 2x + 2x = 20 + 2x [We added +2x to both sides, to cancel out that -2x on the LHS]
4y = 20 + 2x
4y/4 = (20 + 2x)/4 [Divided both sides by 4.]
y = (20 + 2x)/4.
Does this help?[/QUOTE]
fyi today you saved my balls, thanks man!
I'm taking Theory of computation next year. I feel pretty weak on basic algebra. I know the rules and everything, but I always feel lost when I need to do something like induction where I have to prove p(n) + (n+1) = p(n+1). Also I suck at induction.
What can I read or practice to be prepared for a class that I'm told will be mostly proofs?
So nobody has ever explained the chain rule and recursives to me in detail, could someone explain both how they work and their usage for me?
You can liken the chain rule to peeling an onion. If you have many layers (functions) in the onion, you have to peel (differentiate) the outer layers before the inner ones. Basically, when you have a function within a function within a function, you differentiate the outermost one first, then the second, and finally the third and last one.
If we're given three functions f(x), u(x), v(x) and want to differentiate their composition, we have to apply the chain rule. For three functions it looks like this:
d/dx f(u(v(x))) = f'(u(v(x)))*u'(v(x))*v'(x)
In reality it's more general than that: it works for as many functions as there are nested inside one another.
What you're doing is differentiating the outermost function like you normally would, then multiplying by the derivative of the next function (layer), and so on.
Let's say we have f(x) = sin(x), u(x) = e^x and v(x) = x^2. Then f(u(v(x))) would be
[img]https://latex.codecogs.com/gif.latex?%5CLARGE%20sin%28e%5E%7Bx%5E%7B2%7D%7D%29[/img]
To differentiate it, we first take care of the outermost function (layer), sin(x), whose derivative is cos(x). This gives us
[img]https://latex.codecogs.com/gif.latex?%5CLARGE%20cos%28e%5E%7Bx%5E%7B2%7D%7D%29[/img]
which we have to [I]multiply[/I] with the derivative of the next function (layer). Our next function is e^x, and its derivative is just e^x. Then we have:
[img]https://latex.codecogs.com/gif.latex?%5CLARGE%20cos%28e%5E%7Bx%5E%7B2%7D%7D%29%5Ccdot%20e%5E%7Bx%5E%7B2%7D%7D[/img]
Now we have just one function (layer), that is v(x) = x^2, and its derivative is 2x. We [I]multiply[/I] what we got with the derivative of the last function, and we get:
[img]https://latex.codecogs.com/gif.latex?%5CLARGE%20cos%28e%5E%7Bx%5E%7B2%7D%7D%29%5Ccdot%20e%5E%7Bx%5E%7B2%7D%7D%20%5Ccdot%202x[/img]
I'm sorry if this isn't very clear in the way I explained it. I'm going afk for a while so hopefully someone could make this a bit clearer if there are some thoughts and questions still left unanswered.
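If anyone wants to convince themselves numerically, here's a small Python sketch checking the chain-rule derivative above against a finite-difference approximation (the sample points are arbitrary):

```python
import math

def f(x):
    # the composite function sin(e^(x^2))
    return math.sin(math.exp(x * x))

def df(x):
    # chain rule, outermost layer first: cos(e^(x^2)) * e^(x^2) * 2x
    return math.cos(math.exp(x * x)) * math.exp(x * x) * 2 * x

# sanity check against a central finite difference at a few arbitrary points
h = 1e-6
for x in (0.3, 0.7, 1.1):
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(numeric - df(x)) < 1e-4
```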
[QUOTE=Matthew0505;52226885]Does the axiom of extensionality in ZFC combined with the law of the indiscernibility of identicals automatically rule out ur-elements? If there exists a set containing something that wasn't a set, then I could derive a contradiction with the ZFC extensionality and law of IoI.
e.g.
given ur-elements u, v
given sets a, b, c
a = {c, u}
b = {c, v}
According to the axiom of extensionality, a and b are equal, but according to the law of IoI:
a = b implies forall predicates P(P(a) if and only if P(b))
but I could make a predicate Q(x) asking whether ur-element u is contained in x. Then Q(a) holds while Q(b) does not, so the biconditional
Q(a) if and only if Q(b)
fails, and using the contrapositive of material implication (a implies b := not b implies not a) I can derive both of these statements:
not(a = b) according to law of IoI
a = b according to ZFC axiom of extensionality[/QUOTE]
I'm far from a set theorist or logician, but doesn't ZFC only talk about sets? Does it even make sense to apply the axioms to things which are specifically not sets?
Also, unrelated: my algebraic topology final was awful but I somehow managed to get a B+ in the class. :v:
The only objects in ZFC are sets. However, I think the question is whether ZFC is consistent with an axiom that adds ur-elements.
As for the argument, how would you go about constructing the predicate Q? It has been a while since I did set theory, and I don't really know anything about ur-elements, but I believe that there might be a problem with Q actually existing.
Also, a more pathological problem with the proof: there could exist only one ur-element, and it could be in every set.
[editline]16th May 2017[/editline]
Unrelated, but thinking about those spooky infinite sums like 1-1+1-1..., do we get that in a group the operation is no longer associative (or even commutative, if it is abelian) for infinite strings?
Eg for a not equal to the identity 0,
(a-a) + (a-a) + (a-a)... = 0
a + (-a + a) + (-a +a) + ... = a
which appears like a breakdown of associativity for infinitely many elements.
[QUOTE=Dr.C;52208243]Has anyone taken differential geometry? I'm about to finish my course in it and I still don't have a good idea of what this class is about despite getting good grades on nearly everything.
So far it's good for making maps, and I think its best application would be plotting missions on foreign bodies. For example, if you have the radar scans of a planet's surface, you can use them to calculate the shortest routes (shortest route becomes a lot more complicated when you're travelling on a surface) for a lander on the surface of a planet or an asteroid. This would explain why the NASA rep in my class brought up that he was taking the class to learn how to map Venus' surface from the radar scans.[/QUOTE]
Differential geometry is one of the mathematical underpinnings of general relativity. In the absence of external forces (excluding gravity which isn't really even seen as a force anymore in GR) objects follow geodesics through spacetime.
[QUOTE=doom1337;52235880]
Unrelated, but thinking about those spooky infinite sums like 1-1+1-1..., do we get that in a group the operation is no longer associative (or even commutative if it is abelian) for infinite strings?
Eg for a not equal to the identity 0,
(a-a) + (a-a) + (a-a)... = 0
a + (-a + a) + (-a +a) + ... = a
which appears like a breakdown of associativity for infinitely many elements.[/QUOTE]
What does an infinite sum like that even [i]mean[/i]?
If we go back to the standard definition of an infinite sum from real analysis, it is the limit of the sequence s_1, s_2, s_3, ..., where s_n is the nth partial sum. If you play with associativity like that, you are effectively rearranging the terms being summed, and rearranging can change the value of the sum unless the series is [i]absolutely convergent[/i].
IIRC associativity simply states (a+b)+c = a+(b+c). Applications to larger sums would be proven by induction, which means that we haven't proven squat about the infinite case.
TLDR: associativity is fine, infinite sums are weird.
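A quick Python illustration of why the grouping matters for 1 - 1 + 1 - 1 + ...: the partial sums never settle down, and the two groupings give different "answers":

```python
# Grandi's series 1 - 1 + 1 - 1 + ... : the partial sums never converge.
terms = [(-1) ** n for n in range(10)]   # 1, -1, 1, -1, ...
partials = []
s = 0
for t in terms:
    s += t
    partials.append(s)
print(partials)  # [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]

# Grouping as (1-1) + (1-1) + ... makes every partial sum 0 ...
grouped_a = [1 - 1 for _ in range(5)]
# ... while 1 + (-1+1) + (-1+1) + ... makes every partial sum 1.
grouped_b = [1] + [-1 + 1 for _ in range(4)]
assert sum(grouped_a) == 0
assert sum(grouped_b) == 1
```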
Dope, I figured it would involve me not knowing the definition of something. I really should take a real analysis course soon :v
snip just realised a mistake I made.
Infinite sums do some weird shit.
Is there a fast way, or a formula to find all solutions of z^m=n, where z is complex. for example solutions for z^3=-2.
Check out [URL="https://en.wikipedia.org/wiki/Root_of_unity"]the roots of unity[/URL]
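For the concrete z^3 = -2 case, here's a small sketch in Python using cmath (the helper name nth_roots is mine, not a library function): write n in polar form, then spread the m roots evenly around a circle of radius |n|^(1/m).

```python
import cmath
import math

def nth_roots(n, m):
    """All complex solutions of z**m == n."""
    r = abs(n) ** (1.0 / m)          # modulus of every root
    theta = cmath.phase(complex(n))  # pi for negative reals
    return [cmath.rect(r, (theta + 2 * math.pi * k) / m) for k in range(m)]

roots = nth_roots(-2, 3)
for z in roots:
    assert abs(z ** 3 - (-2)) < 1e-9  # each one really cubes to -2
```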
That is exactly what I need, thank you. arg(n) would be pi for negative reals though, right?
What's the reasoning behind this second inequality (the one less than 1/(2sqrt(x)))? The n are natural numbers.
[img]http://i.imgur.com/k3heovp.jpg[/img]
If you have infinite recursion, which is a bug, how do you determine when it should end, Matthew0505?
Started a statistics Coursera course:
[quote]If the variable is numerical, further classify as continuous or discrete based on whether or not the variable can take on an infinite number of values or only non-negative whole numbers, respectively.[/quote]
[quote]...an infinite number of values...[/quote]
[quote]...or only non-negative whole numbers...[/quote]
:|
[editline]14th July 2017[/editline]
Also, if anyone has a working knowledge of 1-category theory and is interested in learning about topoi, this paper is really awesome:
[url]https://arxiv.org/abs/1012.5647[/url]
[QUOTE=Matthew0505;52470879]Should matrices be considered as representations of linear transformations (functions between vector spaces that have the linear identities) or as linear transformations themselves?
For example, with square matrices used for transforms, they are essentially treated as functions with matrix multiplication for function composition.
Is the indexed form of matrices just for making problem solving tractable or does the structure matter beyond the linear transformations that they generate?[/QUOTE]
The former, they're a bookkeeping tool. I think one of the most obvious ways to realize that is that their form is basis-dependent but the linear transformations themselves are not.
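A concrete way to see the basis-dependence (a numpy sketch; the rotation example is mine, not from the thread): the same linear map gets a different matrix in a different basis, while the basis-independent invariants agree.

```python
import numpy as np

# One linear map, two matrices: "rotate 90 degrees" in R^2.
M = np.array([[0.0, -1.0],
              [1.0,  0.0]])          # matrix in the standard basis

P = np.array([[1.0, 1.0],
              [0.0, 1.0]])           # columns form a different basis

M_prime = np.linalg.inv(P) @ M @ P   # the same map, written in the new basis

# The matrices differ, but basis-independent data (trace, determinant) agree.
assert not np.allclose(M, M_prime)
assert np.isclose(np.trace(M), np.trace(M_prime))
assert np.isclose(np.linalg.det(M), np.linalg.det(M_prime))
```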
[editline]15th July 2017[/editline]
Mirzakhani died :<
[QUOTE=JohnnyMo1;52471306]
Mirzakhani died :<[/QUOTE]
What?! ):
So weird to recall that I was checking how old she was just a day before her death.
I never gave a shit about Fourier analysis or programming in undergrad, but I just wrote a little thing in Python that smooths stock market data with Fourier transforms. It's pretty neat that you can do that in 20 lines of Python (with the help of numpy for the FFT).
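Not my actual script, but the core idea fits in a few lines: take the real FFT, zero out everything above some cutoff (the keep parameter here is an arbitrary choice), and invert. A sketch with synthetic "price" data:

```python
import numpy as np

def fourier_smooth(prices, keep=10):
    """Low-pass smooth a 1-D series: FFT, drop high frequencies, inverse FFT."""
    spectrum = np.fft.rfft(prices)
    spectrum[keep:] = 0              # discard everything above the cutoff
    return np.fft.irfft(spectrum, n=len(prices))

# toy series: a linear trend plus noise
t = np.linspace(0, 1, 256)
prices = 100 + 10 * t + np.random.default_rng(0).normal(0, 1, t.size)
smooth = fourier_smooth(prices, keep=8)

assert smooth.shape == prices.shape
assert np.var(smooth) <= np.var(prices)  # smoothing only removes energy
```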
I'm trying to come up with a sin function for which the max is 20 and the minimum is -12 where f(0) = 0
[QUOTE=proboardslol;52565999]I'm trying to come up with a sin function for which the max is 20 and the minimum is -12 where f(0) = 0[/QUOTE]
f(x) = 16*sin(x-0.25268026) + 4
there ya go
idk how accurate you needed the phase shift to be, but f(0) is pretty close to 0 with this
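For what it's worth, that phase shift is exactly arcsin(4/16): the amplitude is (20 - (-12))/2 = 16 and the vertical shift is (20 + (-12))/2 = 4, so f(0) = 0 pins down the phase. A quick Python check:

```python
import math

# amplitude = (20 - (-12)) / 2 = 16, vertical shift = (20 + (-12)) / 2 = 4;
# f(0) = 0 forces 16*sin(-phi) + 4 = 0, i.e. phi = asin(4/16) ~ 0.25268.
phi = math.asin(4 / 16)

def f(x):
    return 16 * math.sin(x - phi) + 4

assert abs(f(0)) < 1e-9
samples = [f(k * 0.001) for k in range(7000)]  # a bit more than one period
assert abs(max(samples) - 20) < 1e-3           # max is 20
assert abs(min(samples) + 12) < 1e-3           # min is -12
```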
Guess there are no other set theorists on here :v:
If you're at a university, and your university has access to ebooks on Springer, and they're published 2005 or more recently, you can get a softcover print copy for $25.
I just ordered Shafarevich I. I'm excited for it to get here. $25 is a steal.