• Fanatical VPS => Afterburst w/ 30% LIFE discount! (+ facepunch bonus!)
I don't want to give you guys bad PR or anything, but telling me my transfer is completed before verifying that it was completed is something that should absolutely never happen in the future. I shouldn't have to argue facts like that with support. Other than that, the server is running fine on US node.
[QUOTE=gparent;38147888]I don't want to give you guys bad PR or anything, but telling me my transfer is completed before verifying that it was completed is something that should absolutely never happen in the future. I shouldn't have to argue facts like that with support. Other than that, the server is running fine on US node.[/QUOTE] Could you PM/post me your ticket#? I can't imagine any of us would knowingly notify of a complete transfer before it's actually complete...
[QUOTE=gparent;38147888]I don't want to give you guys bad PR or anything, but telling me my transfer is completed before verifying that it was completed is something that should absolutely never happen in the future. I shouldn't have to argue facts like that with support. Other than that, the server is running fine on US node.[/QUOTE] That was merely a bug in the vzmigrate command; your container's config wasn't transferred successfully (and vzmigrate didn't notify me of that) - your VPS itself had actually been transferred. This caused the old German VPS to stay online, and network problems on the new US VPS. Nick solved that shortly afterwards. Migrating a server is quite a standard process, but as with anything - something can and eventually will go wrong. [editline]23rd October 2012[/editline] [QUOTE=Fizzadar;38148201]Could you PM/post me your ticket#? I can't imagine any of us would knowingly notify of a complete transfer before it's actually complete...[/QUOTE] It's #242855 :wink:
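The failure mode described here (vzmigrate reporting success while the container config silently failed to transfer) is a good argument for an independent post-migration check rather than trusting the tool's exit status alone. A minimal sketch, assuming the standard OpenVZ layout (`/etc/vz/conf/<CTID>.conf` and `/vz/private/<CTID>`); the CTID and directory arguments are hypothetical:

```python
import os

def migration_looks_complete(ctid, conf_dir="/etc/vz/conf", private_dir="/vz/private"):
    """Independently verify an OpenVZ migration landed: both the
    container's config file and its private area must exist on the
    destination node. A real check would also confirm the CT is
    running and that the source node no longer answers on its IP."""
    conf = os.path.join(conf_dir, "%s.conf" % ctid)
    private = os.path.join(private_dir, str(ctid))
    return os.path.isfile(conf) and os.path.isdir(private)
```

Running this on the destination right after vzmigrate would have flagged the missing config immediately instead of leaving support to argue about it.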
No worries, you guys are managing to give me pretty good service for the price I'm paying and the size of the staff handling it. I simply wanted to point it out explicitly - I'm a sysadmin myself and that's how you end up learning the most. Seeing you explain the issue exactly as it happened gives me confidence, and that was half the point of my post as well. I can see how vzmigrate not reporting the config failure could make the issue a bit sneakier. Thank you guys for the servers.
Can we get some info on how the network is set up? I can't contact serverfault.com from my (US) VPSes, and the first hop out seems to be an RFC1918 address, which is weird.
Very nice prices and specs indeed! I will get the huge one once it's out in the US for beta testing my Open Source project: [URL="http://facepunch.com/showthread.php?t=1220523"]Server Control Panel - For Game Server Hosting Companies[/URL]
[QUOTE=gparent;38174056]Can we get some info on how the network is set up? I can't contact serverfault.com from my (US) VPSes, and the first hop out seems to be an RFC1918 address, which is weird.[/QUOTE] If you restart your VPS you'll see the first hop is no longer RFC1918 (damn OpenVZ and two NICs). However, traffic is still failing to reach Serverfault and, as per the ticket, the failure is annoyingly far outside our network:

[code]
[root@corvette ~]# mtr --report --report-cycles=30 serverfault.com
HOST: corvette.afterburst.com     Loss%   Snt   Last   Avg  Best  Wrst StDev
  1. 96.47.230.33.static.quadrane  0.0%    30    0.2   0.5   0.2   8.0   1.4
  2. 173.44.32.249                 0.0%    30    0.3   0.9   0.3  11.1   2.3
  3. te2-3.ccr01.mia05.atlas.coge  0.0%    30    0.5  16.5   0.4 189.5  43.6
  4. te2-7.ccr01.mia08.atlas.coge  0.0%    30   98.8  24.4   0.5 158.3  45.9
  5. te8-3.ccr01.mia01.atlas.coge  0.0%    30    0.5  58.7   0.4 214.1  70.3
  6. te8-8.ccr01.mia03.atlas.coge  0.0%    30    0.6  48.9   0.5 237.5  70.3
  7. te4-1.ccr02.mia03.atlas.coge  0.0%    30    0.6  53.3   0.6 216.8  67.1
  8. ge-7-4.car2.Atlanta1.Level3.  0.0%    30    0.5   1.8   0.5  40.6   7.3
  9. ae-32-52.ebr2.Miami1.Level3.  0.0%    30    4.2   1.9   0.5   5.0   1.6
 10. ae-2-2.ebr2.Atlanta2.Level3.  0.0%    30   14.6  14.3  13.7  25.0   2.1
 11. ae-1-100.ebr1.Atlanta2.Level  0.0%    30   13.8  13.8  13.7  13.9   0.0
 12. ae-6-6.ebr1.Washington12.Lev  0.0%    30   36.6  33.4  28.3  37.3   2.4
 13. ae-1-100.ebr2.Washington12.L  0.0%    30   28.3  31.9  28.2  40.4   5.2
 14. 4.69.148.49                   0.0%    30   33.7  32.2  31.7  33.8   0.8
 15. ae-91-91.csw4.NewYork1.Level  0.0%    30   31.7  32.1  31.7  42.6   2.0
 16. ae-4-90.edge1.NewYork1.Level  0.0%    30   31.8  33.5  31.7  63.9   6.8
 17. ???                          100.0    30    0.0   0.0   0.0   0.0   0.0
 18. gig2-0.nyc-gsr-b.peer1.net    0.0%    30   32.0  32.1  31.9  34.0   0.4
 19. 64.34.60.18                   0.0%    30   32.2  32.7  32.0  40.1   2.0
 20. ???                          100.0    30    0.0   0.0   0.0   0.0   0.0
[root@corvette ~]#
[/code]

Interestingly, we can't reach any of the Stack Exchange sites via curl, traceroute or ICMP :/
That's the first US node launched publicly; you can order from [url=http://afterburst.com/unmetered-vps]here[/url]. Both the 30% discount and the Facepunch discount are still active.
I've updated the first post also. US info ([url]http://afterburst.com/datacenters[/url]):

[B]Locations:[/B]
+ Miami, USA
- Test IPv4: [url]96.47.230.66[/url]
- Test IPv6: [url]2607:ff48:1:2::2fa:90fe[/url]
- Test Download: [url]http://96.47.230.66/bigtest.tgz[/url]
+ Falkenstein, Germany
- Test IPv4: [url]88.198.224.126[/url]
- Test IPv6: [url]2a01:4f8:121:143::d562:ac36[/url]
- Test Download: [url]http://88.198.224.126/bigtest.tgz[/url]
[QUOTE=Fizzadar;38175973]If you restart your VPS you'll see the first hop is no longer RFC1918 (damn OpenVZ and two NICs). However, traffic is still failing to reach Serverfault and, as per the ticket, the failure is annoyingly far outside our network:

[code]
[root@corvette ~]# mtr --report --report-cycles=30 serverfault.com
HOST: corvette.afterburst.com     Loss%   Snt   Last   Avg  Best  Wrst StDev
  1. 96.47.230.33.static.quadrane  0.0%    30    0.2   0.5   0.2   8.0   1.4
  2. 173.44.32.249                 0.0%    30    0.3   0.9   0.3  11.1   2.3
  3. te2-3.ccr01.mia05.atlas.coge  0.0%    30    0.5  16.5   0.4 189.5  43.6
  4. te2-7.ccr01.mia08.atlas.coge  0.0%    30   98.8  24.4   0.5 158.3  45.9
  5. te8-3.ccr01.mia01.atlas.coge  0.0%    30    0.5  58.7   0.4 214.1  70.3
  6. te8-8.ccr01.mia03.atlas.coge  0.0%    30    0.6  48.9   0.5 237.5  70.3
  7. te4-1.ccr02.mia03.atlas.coge  0.0%    30    0.6  53.3   0.6 216.8  67.1
  8. ge-7-4.car2.Atlanta1.Level3.  0.0%    30    0.5   1.8   0.5  40.6   7.3
  9. ae-32-52.ebr2.Miami1.Level3.  0.0%    30    4.2   1.9   0.5   5.0   1.6
 10. ae-2-2.ebr2.Atlanta2.Level3.  0.0%    30   14.6  14.3  13.7  25.0   2.1
 11. ae-1-100.ebr1.Atlanta2.Level  0.0%    30   13.8  13.8  13.7  13.9   0.0
 12. ae-6-6.ebr1.Washington12.Lev  0.0%    30   36.6  33.4  28.3  37.3   2.4
 13. ae-1-100.ebr2.Washington12.L  0.0%    30   28.3  31.9  28.2  40.4   5.2
 14. 4.69.148.49                   0.0%    30   33.7  32.2  31.7  33.8   0.8
 15. ae-91-91.csw4.NewYork1.Level  0.0%    30   31.7  32.1  31.7  42.6   2.0
 16. ae-4-90.edge1.NewYork1.Level  0.0%    30   31.8  33.5  31.7  63.9   6.8
 17. ???                          100.0    30    0.0   0.0   0.0   0.0   0.0
 18. gig2-0.nyc-gsr-b.peer1.net    0.0%    30   32.0  32.1  31.9  34.0   0.4
 19. 64.34.60.18                   0.0%    30   32.2  32.7  32.0  40.1   2.0
 20. ???                          100.0    30    0.0   0.0   0.0   0.0   0.0
[root@corvette ~]#
[/code]

Interestingly, we can't reach any of the Stack Exchange sites via curl, traceroute or ICMP :/[/QUOTE] Hi, thanks for the reboot tip :) I did open a question on their meta site. The last hop, 64.34.60.18, is Stack Overflow's edge router, so it's probably a route or an ACL on their end. At least now that we don't have an RFC1918 first hop, we can rule out ICMP issues and dropped bogon packets.
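Checking whether a hop address falls inside an RFC1918 range is easy to script with Python's standard `ipaddress` module; a minimal sketch (the sample addresses come from the traces discussed above):

```python
import ipaddress

# The three RFC1918 private ranges
RFC1918_NETS = [ipaddress.ip_network(n)
                for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(addr: str) -> bool:
    """True only for IPv4 addresses inside an RFC1918 private range."""
    ip = ipaddress.ip_address(addr)
    return ip.version == 4 and any(ip in net for net in RFC1918_NETS)

print(is_rfc1918("10.255.0.1"))    # True  - private, shouldn't show up as a public hop
print(is_rfc1918("96.47.230.33"))  # False - the real first hop after the reboot
```

Checking explicit network membership, rather than using `ip.is_private`, avoids also matching loopback and link-local space, which aren't RFC1918.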
The FACEPUNCH5 code doesn't work with the US plans. Was this intentional?
[QUOTE=mechanarchy;38191508]The FACEPUNCH5 code doesn't work with the US plans. Was this intentional?[/QUOTE] Whoops! Fixed :)
[QUOTE=Fizzadar;38192379]Whoops! Fixed :)[/QUOTE] Awesome, just placed my order :)
Just bought a VPS from you guys :) Impatiently waiting for it to be setup so I can get to work :P
[QUOTE=advil0;38252674]Just bought a VPS from you guys :) Impatiently waiting for it to be setup so I can get to work :P[/QUOTE] And it's been processed :wink:
Did you guys reboot Corvette without telling anyone?
[QUOTE=gparent;38296786]Did you guys reboot Corvette without telling anyone?[/QUOTE] It looks like there was a huge SYN flood last night. Whether Nick or Charlie rebooted the node, I'm not sure; I'll leave your ticket open for them to see (in case they did). The attack was 600mbps, so I would presume around 400kpps. Sometimes at that level of attack the node appears completely unresponsive (rather than just having packet loss), so rebooting can seem sensible at the time.

We email regarding:
1) Scheduled downtime
2) Extended unscheduled downtime (e.g. > 5 minutes, or downtime out of our control (network outage/datacenter problem/etc))

For the rest, we tell interested clients if they ask :wink:.
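For reference, the 600 Mbps to ~400 kpps estimate above implies an average packet size of roughly 187 bytes. The conversion is simple enough to sanity-check; a quick sketch (the packet sizes are illustrative assumptions, not measured values):

```python
def pps(bandwidth_mbps: float, avg_packet_bytes: float) -> float:
    """Convert attack bandwidth to packets per second:
    bits/sec -> bytes/sec -> packets/sec."""
    return bandwidth_mbps * 1_000_000 / 8 / avg_packet_bytes

# 600 Mbps at ~187.5-byte average packets is 400k packets/sec
print(round(pps(600, 187.5)))  # 400000

# A pure 64-byte SYN flood at the same bandwidth would be far nastier:
print(round(pps(600, 64)))     # 1171875
```

Packet rate, not raw bandwidth, is usually what makes a node appear unresponsive, which is why the pps figure matters more than the Mbps one.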
You don't notify people of unscheduled downtime? And reboot machines which could have important applications running because of network issues? I was gonna recommend this service to my boss but maybe not now.
Also seems to be a huge lack of communication between staff.
[QUOTE=Catdaemon;38299829]You don't notify people of unscheduled downtime? And reboot machines which could have important applications running because of network issues? I was gonna recommend this service to my boss but maybe not now.[/QUOTE] It's not really my place to question how you do technical support to your server, but it seems like you're telling me the reboot wasn't even necessary in the first place. Even if it were, no hosting service of any kind should reboot your servers without telling you, in fact it's the reason I left my previous provider in the first place. I understand this isn't a 50 people enterprise, but there are basic things that a hosting service must do to remain relevant, no matter how much you pay or how many staff are handling things. Telling people about expected and unexpected downtime is part of this. I know I whine a lot and I don't wanna seem like a know-it-all. I just know that unless you can afford me on payroll there's not much I can do to help except complaining about things like this. Also, people rating cat dumb have no idea how business works. It's a matter of reliability and downtime planning, and that's much more important than how much downtime there is if you ask me.
[QUOTE=gparent;38301377]It's not really my place to question how you do technical support to your server, but it seems like you're telling me the reboot wasn't even necessary in the first place. Even if it were, no hosting service of any kind should reboot your servers without telling you, in fact it's the reason I left my previous provider in the first place. I understand this isn't a 50 people enterprise, but there are basic things that a hosting service must do to remain relevant, no matter how much you pay or how many staff are handling things. Telling people about expected and unexpected downtime is part of this. I know I whine a lot and I don't wanna seem like a know-it-all. I just know that unless you can afford me on payroll there's not much I can do to help except complaining about things like this. Also, people rating cat dumb have no idea how business works. It's a matter of reliability and downtime planning, and that's much more important than how much downtime there is if you ask me.[/QUOTE] This is the same reason I left AllGamer LLC a couple of months back. The datacenter had downtime (their billing center was online) and it was down for a good three hours. During this time, nobody answered my tickets or calls, and no voicemails I left were returned. After around two hours I checked their Twitter and, lo and behold, they mentioned the downtime there. But then when they finally answered the ticket: [QUOTE]Hi Tyler, This afternoon there was a severe network issue. The issue was on the datacenter's equipment therefore we could not fix it. It has been fixed now and you shouldn't have any further issues. [B]We didn't respond to tickets at the time as we didn't have any information about the issue.[/B] I have added a credit of $2 to your account. Best regards, Scott Dollins Systems Administrator Senior Management[/QUOTE] They saw the ticket early on and ignored it since they had no information.
They couldn't even be bothered to shoot off a short response to me until the issue was fixed. Since I was running a mission critical website at the time and was losing money every minute it was down, I left.
[QUOTE=Catdaemon;38299829]You don't notify people of unscheduled downtime? And reboot machines which could have important applications running because of network issues? I was gonna recommend this service to my boss but maybe not now.[/QUOTE] We do, if it was more than 4 minutes - hence my mention of extended downtime. It might be a better idea for us to have our status page (status.fanaticaldev.com) show recent issues instead of emailing people. In this case: I wasn't woken up at 4AM by a Pingdom SMS, so Nick will have handled it within 4 minutes (if it wasn't handled automatically). Regarding reboots: I'd hope if an important service was running it would be scheduled to start on boot :wink: [QUOTE=Jelly;38299909]Also seems to be a huge lack of communication between staff.[/QUOTE] During weekends (especially early in the morning, like this outage), we definitely do communicate less than normal: I won't deny that. If there's a serious problem though, we do SMS or call each other if we are not all online. [QUOTE=gparent;38301377]It's not really my place to question how you do technical support to your server, but it seems like you're telling me the reboot wasn't even necessary in the first place. Even if it were, no hosting service of any kind should reboot your servers without telling you, in fact it's the reason I left my previous provider in the first place. I understand this isn't a 50 people enterprise, but there are basic things that a hosting service must do to remain relevant, no matter how much you pay or how many staff are handling things. Telling people about expected and unexpected downtime is part of this. I know I whine a lot and I don't wanna seem like a know-it-all. I just know that unless you can afford me on payroll there's not much I can do to help except complaining about things like this. Also, people rating cat dumb have no idea how business works.
It's a matter of reliability and downtime planning, and that's much more important than how much downtime there is if you ask me.[/QUOTE] I'm thinking of suggesting to Nick that we have a section on our status page for such matters - as of course, transparency is always good. But, over-communication is also a problem (we can't be emailing all clients affected by a 5 second outage, for example). The status page would be a good solution for that, in my opinion. [editline].[/editline] Just had an even better idea: We could make a little "subscribe" part on that status page; so clients who want to be notified about tiny downtimes would be emailed as soon as we issue a status update (could refine that to updates that affect services they have in their account)
You definitely should have a status page, but that should complement emails, not replace them. You should warn on every single reboot, no matter whether it's an emergency one or not. For [B]unexpected[/B] network outages, I'm okay if you don't email me immediately and wait at least 2 minutes. For [B]expected[/B] network outages, it doesn't matter if it takes half a second or an hour, you need to send one out. Status emails should also be opt-out rather than opt-in; anybody who knows what they are doing expects to be emailed when there is downtime, so it's the logical default. When in doubt, do what Linode does. If they have emergency downtime, they tell me, even if it's 1 minute before rebooting my server and I won't have time to read it. [quote]Regarding reboots: I'd hope if an important service was running it would be scheduled to start on boot[/quote] How people have their VMs set up is irrelevant really. Good luck restoring that SIP/RTP session on boot.
Let me quickly clear this all up:

[b]Unexpected [Minor] Outages[/b]: if the drop is under 5 minutes and not complete (100% loss) we don't notify. These kinds of drops normally only affect one route.
[b]Reboots, Long Outages[/b]: we will always notify of any reboots or extended outages (though this may be 5-10 minutes in, depending on the situation). In the case of Corvette this was a genuine mistake; the node shouldn't have been rebooted at all.
[b]Expected Outages/Maintenance[/b]: always notified at least an hour in advance (where failing drives are being replaced) or at least a few days (when adding hardware/similar).

We try to be as open as possible when it comes to any issues on our servers. Corvette's reboot was a one-off and will not happen again.
Alright, glad we're all clear on the reboot part. I'm okay with not being notified on very small outages, and the rest seems fine.
Well, I ordered one just a few minutes ago (moving from vpsno.de aka allgamer), but now I'm kind of worried from reading this: Except for a few times where it was out of control they never had to restart my vps, and I've easily had uptimes of a few months. From what I'm reading here, it seems like it is going down every few DAYS.
[QUOTE=ShaRose;38340713]Well, I ordered one just a few minutes ago (moving from vpsno.de aka allgamer), but now I'm kind of worried from reading this: Except for a few times where it was out of control they never had to restart my vps, and I've easily had uptimes of a few months. From what I'm reading here, it seems like it is going down every few DAYS.[/QUOTE] Well in my case it only went down once due to something out of my control, the rest of the times it was me opening a ticket about something.
[QUOTE=ShaRose;38340713]Well, I ordered one just a few minutes ago (moving from vpsno.de aka allgamer), but now I'm kind of worried from reading this: Except for a few times where it was out of control they never had to restart my vps, and I've easily had uptimes of a few months. From what I'm reading here, it seems like it is going down every few DAYS.[/QUOTE] If we had downtimes every few days we'd never have any customers! Almost all our servers are currently on ~50 days uptime (47 days ago today we upgraded most German servers to 4 drives - hence only ~50). Unless absolutely necessary (hardware failures & kernel crashes) our nodes are never rebooted :)
Well, I'm still (slowly) getting everything set up, but I got G-WAN running again, and I can't believe I was worried about Java.

[code]
root@ShaRose:~# weighttp -n 1000000 -c 100 -t 4 -k "localhost:8080/100.html"
weighttp - a lightweight and simple webserver benchmarking tool

starting benchmark...
spawning thread #1: 25 concurrent requests, 250000 total requests
spawning thread #2: 25 concurrent requests, 250000 total requests
spawning thread #3: 25 concurrent requests, 250000 total requests
spawning thread #4: 25 concurrent requests, 250000 total requests
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 5 sec, 516 millisec and 116 microsec, 181286 req/s, 66743 kbyte/s
requests: 1000000 total, 1000000 started, 1000000 done, 1000000 succeeded, 0 failed, 0 errored
status codes: 1000000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 377000000 bytes total, 277000000 bytes http, 100000000 bytes data
[/code]

[code]
root@ShaRose:/gwan# weighttp -n 1000000 -c 100 -t 4 -k "localhost:8080/?hello.java"
weighttp - a lightweight and simple webserver benchmarking tool

starting benchmark...
spawning thread #1: 25 concurrent requests, 250000 total requests
spawning thread #2: 25 concurrent requests, 250000 total requests
spawning thread #3: 25 concurrent requests, 250000 total requests
spawning thread #4: 25 concurrent requests, 250000 total requests
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 5 sec, 382 millisec and 30 microsec, 185803 req/s, 53345 kbyte/s
requests: 1000000 total, 1000000 started, 1000000 done, 1000000 succeeded, 0 failed, 0 errored
status codes: 1000000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 293999890 bytes total, 275999890 bytes http, 18000000 bytes data
[/code]

That'll do vps.
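weighttp's summary figures are self-consistent and easy to verify from its own timing line; a quick sketch using the numbers from the first run above (1,000,000 requests and 377,000,000 bytes in 5.516116 s):

```python
def throughput(total: float, seconds: float) -> float:
    """Requests (or bytes) per second from weighttp's totals."""
    return total / seconds

elapsed = 5.516116  # "finished in 5 sec, 516 millisec and 116 microsec"
print(int(throughput(1_000_000, elapsed)))           # 181286 req/s, as reported
print(int(throughput(377_000_000, elapsed) / 1024))  # 66743 kbyte/s, as reported
```

Both derived values match what weighttp printed, so the reported rates are just totals divided by wall-clock time, nothing exotic.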
That'll do.
Hey guys. Is it possible to have a live chat to discuss my latest ticket?