• Help me optimize my query
    8 replies
Hey, I've got a MySQL database with a 15,000-record songs table and a songs_ratings table with 750,000 records. Right now, when I want to find the highest rated song overall, the query takes about 100 seconds to process. I've done no optimization so far, as I've never actually dealt with tables of this magnitude. Here's the query I'm using to get the overall rating of every song. I realize this returns 15,000 rows, but I feel like even with that many it should go much faster than 100 seconds. SELECT s.id, AVG(r.rating) FROM `songs` s LEFT OUTER JOIN `songs_ratings` r ON r.sid = s.id GROUP BY s.id;
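Before denormalising, a composite index often lets MySQL serve this GROUP BY straight from the index instead of scanning all 750,000 rating rows. A sketch, assuming the table and column names from the query above (the `title` column and the index name are illustrative):

```sql
-- Covering index: both the join (sid) and the AVG (rating)
-- can be answered from the index alone.
CREATE INDEX idx_ratings_sid_rating ON songs_ratings (sid, rating);

-- To get just the single highest-rated song, sort the grouped
-- averages and keep the first row:
SELECT s.id, s.title, AVG(r.rating) AS avg_rating
FROM songs s
JOIN songs_ratings r ON r.sid = s.id
GROUP BY s.id, s.title
ORDER BY avg_rating DESC
LIMIT 1;
```

Run `EXPLAIN` on the query before and after adding the index to confirm it is actually being used.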
[url=http://en.wikipedia.org/wiki/Denormalization]Denormalize[/url] the database. Store the score in the songs table (and recalculate it when necessary).
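A minimal sketch of that denormalisation, assuming an `avg_rating` column is added to `songs` (the column and index names are illustrative, not from the original schema):

```sql
-- Denormalised column on the songs table.
ALTER TABLE songs ADD COLUMN avg_rating DECIMAL(4,2) NULL;

-- Recalculate whenever ratings change (or on a schedule):
UPDATE songs s
SET s.avg_rating = (
    SELECT AVG(r.rating) FROM songs_ratings r WHERE r.sid = s.id
);

-- The "highest rated song" query then becomes a trivial indexed lookup:
CREATE INDEX idx_songs_avg_rating ON songs (avg_rating);
SELECT id FROM songs ORDER BY avg_rating DESC LIMIT 1;
```

The trade-off is the usual one with denormalisation: writes get slightly more expensive (the stored average must be kept in sync), reads get much cheaper.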
Nice tip, I was thinking that would just be unprofessional and redundant but now it makes perfect sense. Sometimes the easy solutions that seem like cheating end up being the most efficient overall. I was trying all methods of indexing and this is way better than all of that.
[QUOTE=adamjon858;26617777]Nice tip, I was thinking that would just be unprofessional and redundant but now it makes perfect sense. Sometimes the easy solutions that seem like cheating end up being the most efficient overall. I was trying all methods of indexing and this is way better than all of that.[/QUOTE] You could always have a normalised "master" database and a second de-normalised database copy which is built from the master. The second copy would be stored using memory tables and would be built when the server is restarted. Both copies should be kept up-to-date when adding, removing and updating records. This is slower when modifying records, but faster when reading. You should use indexes on every field you search by.
I've never heard of doing it that way. Honestly, it seems like a lot of work for a little performance gain.
[QUOTE=yngndrw;26652128]You could [b]always[/b] have a normalised "master" database and a second de-normalised database copy which is built from the master.[/QUOTE] Always??? For small sites, no. Hell, even for medium sites, no.
[QUOTE=adamjon858;26672107]Honestly it seems like a lot of work for a little performance gain[/QUOTE] Yes, it's a lot of work, and while it may not be worth it for your situation, I'm just giving general advice based upon your question and expanding upon [b]itsbth[/b]'s post. Regarding the "little performance gain" part, I disagree. By having a de-normalised database entirely in system memory, the read performance gains would be very good - it wouldn't even have to touch the hard drive, which is by far the slowest part of a database. [QUOTE=timgames;26672448]Always??? For small sites no. hell even for medium sites no.[/QUOTE] I said "could always", not "should always".
How do I set up a database to be in memory though?
[QUOTE=adamjon858;26685072]How do I setup a database to be in memory though?[/QUOTE] Change the storage engine to MEMORY
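In MySQL the storage engine is set per table, so one way to follow yngndrw's master/copy idea is to build the in-memory read copy from the master. A sketch, assuming the schema from earlier in the thread (the `songs_fast` table name is illustrative):

```sql
-- Build an in-memory read copy from the master tables.
-- MEMORY tables are lost on server restart, so this must be
-- re-run at startup (and refreshed when ratings change).
CREATE TABLE songs_fast ENGINE=MEMORY
    SELECT s.id, AVG(r.rating) AS avg_rating
    FROM songs s
    LEFT JOIN songs_ratings r ON r.sid = s.id
    GROUP BY s.id;

-- Reads then go against the in-memory copy:
SELECT id FROM songs_fast ORDER BY avg_rating DESC LIMIT 1;
```

Be aware of the MEMORY engine's limitations: it does not support TEXT/BLOB columns, it uses table-level locking, and its size is capped by `max_heap_table_size`.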