Fantasy Football - Footballguys Forums


MFL Down!

Yahoo was down for the entire morning of the day that brackets were due (at noon) this year, by the way. Many people didn't get their bracket submitted because of it.

 
Hi everyone! I'm new to this site but have been playing fantasy football on ESPN for a while. My question is: why does everyone on this site seem to prefer MFL to other sites? Don't they charge a fee? Thanks!
I think if you are just playing normal redraft, you can't go wrong with ESPN, Yahoo, or one of the other free sites.

If you are playing dynasty, MFL is the way to go. You can customize it to do whatever you want to do really easily. It automatically carries forward your rosters, draft picks, and contracts if you are doing a salary cap league. You can do drafts, auctions, or both. It tracks your future rookie picks so you can trade them, etc. It only ends up being about $5 per person each year, so not too bad.

It took me a year to really get used to it and I actually kind of hated it at first. But once I took a little time to play around with it, I was pretty amazed by all the features on it and how easy it was to do so many different things.

MFL and Leaguesafe are the 2 musts for me personally if I am looking to join a new dynasty league.

 
Come on guys....you are all better than this....or at least can be, especially towards new members. Shame on you Hammond. You are now staff and should refrain from piling on.
That didn't come off as piling on a new guy to me. Seemed like he was making fun of the people complaining about MFL going down for 12 hours who have no actual clue how server infrastructure works and just like to complain and say "I thought this was 2014! This stuff shouldn't be happening." MFL could have been hosted directly from Google headquarters and still gone down. As I said earlier in this thread, full off-site emergency backups cost a ton of money. I know we pay MFL and all, but not likely enough for that kind of infrastructure backbone. I'm sure they have everything backed up in multiple off-site locations, but there's a difference between hard-copy backups and full off-site backups with emergency DNS switching and such.
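For anyone wondering what "emergency DNS switching" actually involves, here's a minimal sketch of the idea: a monitor watches the primary site and, after a few failed checks, repoints the DNS record at a standby host. The health-check URL and the switch_dns_to_standby helper below are made up for illustration; a real setup would call a DNS provider's API at that point.

import time
import requests  # assumes the requests library is available

PRIMARY_URL = "https://primary.example.com/health"  # hypothetical health-check URL
CHECK_INTERVAL = 30          # seconds between checks
FAILURES_BEFORE_SWITCH = 4   # roughly two minutes of consecutive failures

def primary_is_up() -> bool:
    """True if the primary site answers with HTTP 200 within 5 seconds."""
    try:
        return requests.get(PRIMARY_URL, timeout=5).status_code == 200
    except requests.RequestException:
        return False

def switch_dns_to_standby() -> None:
    """Placeholder: call your DNS provider's API here to point the site's
    A record at the standby data center. Keeping the record's TTL low ahead
    of time is what makes the switch take effect quickly."""
    print("Would update the DNS record to the standby IP now")

failures = 0
while True:
    if primary_is_up():
        failures = 0
    else:
        failures += 1
        if failures == FAILURES_BEFORE_SWITCH:
            switch_dns_to_standby()
    time.sleep(CHECK_INTERVAL)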
You know, instead of just constantly bashing me (or anyone else) for not understanding what the problem was/is, how about you educate me on it if you have the time? OK, the provider went down; people's requests to the site could not be completed, therefore the outage. I'm sure there's more to it, so please help.

 
What's funny and childish is not realizing that Kelly was making statements about technology he clearly doesn't understand, while still implying how simple the solution for MFL should have been. This has become a pet peeve of mine in modern society: "everyone is an expert in everyone else's job/area of expertise... just ask them." Of course, if you actually work in the field, you realize how stupid most of the sideline comments really are in relation to what actually happens.

As for the OP's childish brilliance in talking about something he has no clue about, I could list several examples. The simplest is that you cannot inform the subscribers of your internet site that the site is down if the website's internet provider is down, unless you expect them to text or phone everyone with this information. MFL had no access to the internet until their carrier fixed the problem. I won't get into the "back up the routers" and other absurd technology comments.

I had a dynasty rookie draft going on that got stopped. It has created some hardship because an owner is leaving on a vacation without internet access; the original time frame would have had the rookie draft completed before he left. I think most people around computers realize that computers break down, and they don't ##### about it. If the shutdown lasted 2-3 days, I would agree the delay was longer than expected and deserved some criticism. But #### happens, especially with technology; suck it up a little.
See my post above; instead of throwing rocks, please help me understand.

 
I'm not bashing you for not understanding. I'm bashing you for acting like you did and throwing a fit about something you didn't understand. Then, when I called you out and said you clearly don't understand this and shouldn't be mad because it's out of MFL's hands, you responded that you did understand and that's why you were so angry about it.

So you got mad about something you didn't understand, then pretended you did, when you clearly didn't, just to justify your unjustifiable anger.

 
I am a commish and use MFL because it's the only FF site that supports OL. They offer features other FF sites don't have. But the customer service is slower than it used to be. I keep our league printed out, so if they do go down I have a record of the rosters, since I don't want to have to remember who was on what team. MFL is the only game in town, so they are still used by a lot of leagues. Just remember they are located at a house, which to me is a little strange. For what they charge they should at least be in office space.

 
It's probably time to put all this to bed, set aside the conflicts and the rest of the nonsense, maybe even have the thread locked. The MFL site has been up 3 1/2 days, everything that is going to return to normal is back to normal, what was lost (auction stuff due to timer expiration) has been sorted out, and whatever guys are doing to compensate, they have done.

It's reasonable that anyone in Kelly's shoes, waking up Saturday morning in the middle of an auction, would be alarmed that the site was down, and then upset that once the site came back up, auction stuff had disappeared (later figuring out it was due to timer expiration).

OK, so he made comments while upset at what was happening, not fully realizing how he was coming across on a board full of tech types, and got ribbed for it. I think most did so in fun, a little mean maybe, but in a locker-room-teasing sort of way.

But Kelly is well liked here and spends a lot of time contributing good comments to the board. Apologies if I added to the kidding and it was taken badly. Let's all sing kumbaya or 99 bottles of beer on the wall or whatever, and move past this. It's over; we're all still buds.

 
I am a commish and use MFL because it's the only FF site that supports OL.
OL as in Offensive Linemen? I'd love to hear how that works, if you've got a moment to explain.

 
They might blame it on their provider, but why does a provider go down for over 12 hours without backups? The provider is probably not one of the big companies, because the smaller ones are half the cost. MFL customer service is not as good as it used to be. Their format is the same as it was 5 years ago. I found their address online, and it was a house when I googled it: 3077 Sunnyside Street

Stoughton, Wisconsin 53589. That could be a huge problem: using residential rather than business providers, and not backing their data up themselves. I got the address from their website.
I guess, coming from someone who doesn't know what he's talking about, this was kind of my question when I learned that it was a provider issue.

How come there wasn't a backup route when the main one went down? I know every site is going to go down once in a while, and some companies are worth more than others and therefore have better redundancy plans than their smaller counterparts. But a 12 to 14 hour outage is a little ridiculous in today's world.

Another issue I had is why MFL didn't back up their servers from the time they found out their customers were having outages. The league I'm involved in was running a slow auction; players come off the board after 12 hours, so of course every player on our board at the time went off during the outage.

MFL couldn't restore our auction board; how come? And I'm sure a lot of drafts were affected in some way during the outage. They had plenty of time to back up all the leagues to some kind of restoration point and then could have done a mass restore once the provider came back online. That is possible, correct?
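For what it's worth, the "restoration point" being asked about here would basically just be a snapshot of each league's state (rosters, auction board, timers) written out when the outage is detected, then reloaded once service resumes. A rough sketch of the idea, with a made-up file layout and a toy league state:

import json
import time
from pathlib import Path

SNAPSHOT_DIR = Path("league_snapshots")  # hypothetical location for restore points

def snapshot_league(league_id: str, state: dict) -> Path:
    """Write one league's current state to a timestamped JSON file so it
    can be restored after the outage instead of letting timers run out."""
    SNAPSHOT_DIR.mkdir(exist_ok=True)
    path = SNAPSHOT_DIR / f"{league_id}-{int(time.time())}.json"
    path.write_text(json.dumps(state, indent=2))
    return path

def restore_league(snapshot_path: Path) -> dict:
    """Read a snapshot back; the site would then reapply this state and,
    for example, reset any auction clocks that expired during the downtime."""
    return json.loads(snapshot_path.read_text())

# Example: freeze a toy auction board, then bring it back.
state = {"auction_board": [{"player": "RB1", "high_bid": 42, "clock_hours_left": 7}]}
saved = snapshot_league("league_12345", state)
print(restore_league(saved) == state)  # True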

 
Dude.....lighten up

 
OL as in Offensive Linemen? I'd love to hear how that works, if you've got a moment to explain.
Under Commish Setup you can click Enter Custom Players (you get 100 of these, so every year after RFA you have to send an email to have MFL delete the FA Off players). There you can enter players and the team they're on, using Off as the position. Then, in Custom Scoring, you set up scoring for the Off position to cover OL. We use passing/rushing yards, points scored, and QB sacked. In the comments section each week, owners must state which OL is where (2 OT / 2 OG / 1 C) to get full points. Our league allows different formations, which is what MFL is known for. On defense we say owners must state which LBers are playing OLB or MLB. We use a salary cap and contract years too: 60-man roster, 8-man practice squad. It's not as complicated as it sounds. This is why MFL is great. This league is modeled after efsports leagues, where I used to play with some of the owners.
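If it helps to picture the scoring side of this, here's roughly what scoring a custom "Off" position could look like. The point values below are made up for illustration; the post above only names the categories (passing/rushing yards, points scored, QB sacked) and the full-lineup comment rule.

def score_ol_unit(pass_yards: int, rush_yards: int, points_scored: int,
                  qb_sacks_allowed: int, full_lineup_declared: bool) -> float:
    """Score an offensive-line custom player for one week.
    The weights are illustrative, not any league's actual settings."""
    pts = 0.0
    pts += pass_yards * 0.02       # e.g. 1 point per 50 passing yards
    pts += rush_yards * 0.05       # e.g. 1 point per 20 rushing yards
    pts += points_scored * 0.5     # half a point per point the offense scores
    pts -= qb_sacks_allowed * 1.0  # lose a point per sack allowed
    if not full_lineup_declared:   # owner didn't state 2 OT / 2 OG / 1 C in the comments
        pts *= 0.5                 # partial credit only, per the league rule above
    return pts

# A 300-yard passing, 120-yard rushing, 24-point game with 2 sacks allowed:
print(score_ol_unit(300, 120, 24, 2, full_lineup_declared=True))  # 22.0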

 
Okay, so yes, as I've explained already, redundancy can be expensive. And MFL is a business, so obviously they're trying to make money, not spend as much of it as humanly possible. Let's put it this way: even Google and Facebook go down for 5-10 minutes once every year or so, and the redundancy infrastructure they have in place is probably worth 10+ times MFL's entire company. So taking that into account, it's not unreasonable for MFL to drop out for 12-14 hours. Should it happen? No, absolutely not; their provider should have those contingency plans in place so that MFL doesn't have to. That's often the entire reason you pay a bunch of money to a DC to host all your services. Clearly their provider didn't have anything in place, and it sounded to me like they were probably going to be switching providers ASAP after this fiasco.

Put it this way: as someone who has worked on military bases and had the entire military AKO network go down for about 6 hours once, it's not that crazy. It happens, and avoiding it costs millions if not billions of dollars in redundancy planning, the type of money that MFL doesn't have.

As for the restoration? Yeah, that's a little ridiculous. I'm not sure what kind of half-rate backup plan they have, but it should be backed up at an almost instantaneous level. Considering their business model, rolling back can really screw people who had plans to get guys at better values in auctions, etc. I'd hope they have a better plan in place next time; for the restoration there's literally no excuse. Backup systems cost a lot as well, but not nearly as much as the network issues would. It isn't unreasonable to have a live off-site backup that's constantly checking for and downloading all the new database files. That said, that's getting a little too advanced for me as well. But I know at my current job we have all of our databases backed up and stored off-site with Barracuda every 120 seconds, and it barely costs us anything (like $1,000/month or something for unlimited data storage at multiple redundancy points). So I'm not sure why MFL couldn't have the same type of setup.
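For a sense of what that kind of periodic off-site backup amounts to, here's a minimal sketch: dump the database on a schedule and push the file to another site, so the most recent copy is never more than one interval old. It assumes a PostgreSQL database and a made-up rsync destination; it's an illustration of the idea, not how MFL or Barracuda actually do it.

import subprocess
import time
from datetime import datetime, timezone

DB_NAME = "leagues"                                   # hypothetical database name
OFFSITE = "backup@offsite.example.com:/backups/mfl/"  # hypothetical off-site host

def backup_once() -> None:
    """Dump the database to a timestamped file and copy it off-site."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dump_file = f"/tmp/{DB_NAME}-{stamp}.sql.gz"
    # pg_dump writes the dump; gzip keeps the files small.
    subprocess.run(f"pg_dump {DB_NAME} | gzip > {dump_file}", shell=True, check=True)
    # rsync pushes it to the off-site host; if the data center later loses power,
    # the newest copy sitting elsewhere is at most one interval old.
    subprocess.run(["rsync", "-az", dump_file, OFFSITE], check=True)

if __name__ == "__main__":
    while True:
        backup_once()
        time.sleep(120)  # the 120-second cadence mentioned above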

 
Guys-

All comments in this thread are greatly appreciated. Even the criticisms ... We have to be better than we were Friday night/Saturday morning.

Without going deep into specifics, we'll just say that it was a perfect storm of not good. Our hosting site losing power, Memorial Day weekend, new servers and color blindness ... All played a role in what was an unacceptable amount of downtime.

Going forward, steps are in place to make sure that THIS issue won't bring the site down for more than a few minutes. While being down for 10 hours wasn't fun, we're treating it as a learning experience that will only help us in the future.

We appreciate your support and thank you for using MFL!

 
Please consider backing up your routers in the future. TIA.

 
I'm pretty sure they TFTP into their routers and switches every quarter or so and copy the configs onto a backup HDD. And I'm also sure they have hot and cold spare routers/switches on site, ready to take over if one goes down, and the switches are most likely in a stack config in case one dies.
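In the same spirit, "copying the configs" usually just means pulling the running config off each device on a schedule, so a dead router or switch can be rebuilt from a known-good copy. Here's a sketch that does it over SSH with paramiko rather than TFTP; the hostnames and credentials are placeholders, and it assumes the devices accept simple exec commands over SSH.

import paramiko  # assumes the paramiko library is available
from pathlib import Path

DEVICES = ["core-rtr-1.example.net", "edge-sw-1.example.net"]  # placeholder devices
USERNAME, PASSWORD = "netops", "change-me"                     # placeholder credentials
BACKUP_DIR = Path("config_backups")

def backup_config(host: str) -> None:
    """SSH to the device, run 'show running-config', and save the output locally."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=USERNAME, password=PASSWORD, timeout=10)
    try:
        _, stdout, _ = client.exec_command("show running-config")
        BACKUP_DIR.mkdir(exist_ok=True)
        (BACKUP_DIR / f"{host}.cfg").write_text(stdout.read().decode())
    finally:
        client.close()

for device in DEVICES:
    backup_config(device)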

 
This actually made some sense. That said, it's still completely irrelevant: their DC lost power, as clearly stated above, which means they'd need an off-site backup setup in order to really have stopped this. As for color blindness... not sure I understand that one. Did someone splice a wire wrong somewhere because they were color blind? Because if so, that's hilarious. Color blind people probably shouldn't be involved in wiring.

 
Once the power went down, we placed a call to our host to check in on the issue. The person we spoke with looked at our servers and told us the lights were on and the servers were running. That led us to believe that the issues continued to be on their end.

While the lights were indeed on, they weren't green -- the desired color. Unfortunately, we later found out that the person who checked is color blind and couldn't tell the difference between green and whatever color the servers showed while in standby.

Like I previously stated: Everything was the perfect storm of bad.

 
Oh man, that's even better. So you're saying the guy couldn't tell the difference between green and amber lights and was just like "YEAH BRO! LIGHTS ARE ON! MUST BE YOUR PROBLEM!". That's failure at a higher level than I could have even imagined.

 
I understand, just hitting back at the guy(s) mocking my "back up the routers" statement.

 
Oh man, that's even better. So you're saying the guy couldn't tell the difference between green and amber lights and was just like "YEAH BRO! LIGHTS ARE ON! MUST BE YOUR PROBLEM!". That's failure at a higher level than I could have even imagined.
That would be correct.

We were told the lights were on and servers were running. As a result, we believed the issues to be on their end given that it was a power outage that started the entire process.

 
ouch, that sucks... did the janitor answer the phone?

 
Total assumption on my part, but I'm guessing the fact that it was Memorial Day weekend didn't help things as far as the A-Team being on duty.

 
Probably correct... it was probably some college intern who was playing Call of Duty and didn't want to be bothered :cool:

 
I have to admit, there's some humor in this.

 
:goodposting:

This is schtick, right?

 
It's kind of down again. I say "kind of" because I can access some leagues but not others.

I know league mates in one league were having the same issue yesterday, because one emailed about it. At the time, I checked and had no issues getting on.

 
There's a pretty clear "DANGER" message on the server this morning. However, their IT on-call is dyslexic and frantically trying to google "gander server error".

 
Is anyone else having issues right now? It keeps telling me the server is down, then the page works, back and forth for the past hour.

 
All 4 of my MFL-based leagues are opening just fine.

Maybe you're in a league that's connected by a green wire.

 
Must be, it is doing it again today.

Service Temporarily Unavailable: The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
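If you want to see whether it's really down or just flapping, a quick poll like the sketch below will show the pattern. The URL is MFL's public home page; swap in your own league page if you prefer.

import time
import requests  # assumes the requests library is available

URL = "https://www.myfantasyleague.com/"  # or your league's home page

for _ in range(20):  # check every 30 seconds for about 10 minutes
    try:
        code = requests.get(URL, timeout=5).status_code
        status = "UP" if code == 200 else f"HTTP {code}"  # 503 = the maintenance page quoted above
    except requests.RequestException:
        status = "DOWN (no response)"
    print(time.strftime("%H:%M:%S"), status)
    time.sleep(30)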
 
